Inside FLAME methodology

So you’ve got a great idea for a new kind of digital media service – and even a small-scale prototype running in your computer lab – but how are you going to convince consumers and investors that this thing is really going to create impact at city scale? Welcome to the FLAME methodology: a principled process that will expedite the evolution of your digital innovation into a production-ready media service – with a little help from the FLAME platform!

In this blog by Dr. Simon Crowle from the IT Innovation Centre, we take a look inside the FLAME methodology, which takes an experiment-driven, knowledge-based approach to transforming the delivery and user experience (UX) of your digital media services. You will learn more about the steps you can take to integrate your media service with the FLAME platform and how to measure impact. We’ll finish by thinking about the knowledge you create (through the FLAME platform) and its value to others in your FMI ecosystem.

FLAME: for the media service providers

If you’re in the business of providing online digital media services, it’s quite likely you’ve already run up against the complexities and costs associated with delivering your service at scale. Keeping time-poor users engaged with your service and content can be challenging and expensive. Meeting demand for a highly responsive, personalized and localized interactive experience is not trivial. Perhaps you simply throw huge resources at the problem and live with the pain of monitoring, maintaining and paying for a legion of micro-services? Perhaps not.

Enter the FLAME methodology: an approach to designing, deploying and evaluating dynamically distributed, scalable digital media service functions across an intelligent network. Whether you are creating an entirely new service or thinking about transforming an existing one, the FLAME methodology offers a rational pathway to understanding and controlling key performance indicators (KPIs) that impact UX. In the next section we will take a high-level look at how the FLAME methodology and platform work hand-in-hand to create a high value, lower cost solution to delivering your digital media service.

Understanding demand and response

When deploying a digital media service on the FLAME platform, you are placing it in a responsive environment that reacts (using behaviours you have defined) in real-time to metrics of interest to you. Consider this behaviour as a continuous cycle of optimization in which your target UX is negotiated with the most efficient use of available platform resources.

In the figure above we illustrate how user-driven engagement with a media service influences the demand for the service functions (SFs) used to deliver it. This in turn will drive changes in the response characteristics of the SFs (for example, resource usage and response time). The FLAME platform continuously monitors and evaluates metrics that reflect this demand, response and resource usage. When pre-defined conditions are met (such as response time rising above a threshold), the FLAME platform will intelligently execute control actions to maintain your requested quality of service, and thus sustain high quality user experiences for your clients.
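The cycle described above can be sketched as a simple rule evaluation. This is an illustrative Python sketch, not the FLAME platform’s actual API: the metric names, threshold value and action labels are all hypothetical.

```python
# Illustrative sketch of a demand/response control rule (not the FLAME API).
# All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class SfMetrics:
    """Metrics sampled for one service function endpoint."""
    response_time_ms: float
    cpu_percent: float

RESPONSE_TIME_THRESHOLD_MS = 200.0  # hypothetical QoS target

def evaluate(metrics: SfMetrics) -> str:
    """Return a control action: scale out when response time degrades,
    scale in when the SF is clearly over-provisioned."""
    if metrics.response_time_ms > RESPONSE_TIME_THRESHOLD_MS:
        return "scale_out"   # add an SF endpoint to restore QoS
    if metrics.cpu_percent < 10.0:
        return "scale_in"    # reclaim under-used resources
    return "no_action"
```

Running `evaluate` on each monitoring sample gives the continuous negotiation between target UX and resource usage described above.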

Knowledge and value generation

Engaging your media service with the FLAME platform and methodology will result in the progressive generation of knowledge and value for you and others in your ecosystem. This process starts with an initial packaging of your service and its deployment on a FLAME integration platform: a virtualized, small-scale hosting environment that provides all the essential elements of the FLAME platform.

In FLAME, a media service is defined as a service function chain (SFC). An SFC is bundled as a collection of virtual machines or containers together with a TOSCA specification that defines the resource requirements and the policies for placing SFs across the network, cloud and edge computing resources. Once your SFC has been verified as a ‘go’, you can then start to run formal experiments: tests that mimic user interactions with your media service using simulation techniques. The results of these will pave the way to conducting real user trials at small and then large scale. We will touch on each of these phases next.
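TOSCA templates are written in YAML, and the exact FLAME schema is not reproduced here; as a rough, hypothetical illustration of the kind of information an SFC package carries, the same shape is expressed below as a Python dictionary. Every field name, image URL and policy value is illustrative, not the real TOSCA or FLAME vocabulary.

```python
# Illustrative only: the shape of an SFC description. Real TOSCA is YAML and
# the FLAME schema differs; all field names and values here are hypothetical.
sfc_template = {
    "service_functions": {
        "transcoder": {
            "image_url": "https://example.org/images/transcoder.qcow2",  # hypothetical
            "resources": {"vcpus": 2, "ram_mb": 4096, "disk_gb": 20},
            "placement_policy": "edge",   # prefer edge clusters near users
            "instances": 2,
        },
        "web_frontend": {
            "image_url": "https://example.org/images/frontend.qcow2",  # hypothetical
            "resources": {"vcpus": 1, "ram_mb": 1024, "disk_gb": 10},
            "placement_policy": "cloud",
            "instances": 1,
        },
    },
    # Order in which traffic traverses the chained service functions.
    "chain": ["web_frontend", "transcoder"],
}
```

The key point is that one document couples the images that implement the service with the resource and placement policies the platform needs to deploy them.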

Your media service deployed on a FLAME platform

In this first step you will verify that your packaged SFC deploys, initializes and serves correctly using FLAME’s platform services and software defined network (SDN). Let’s imagine that such an SFC contains service functions ‘A’, ‘B’ and ‘C’ and that, for this initial deployment, you want two instances (or service function endpoints [SFEs]) of SF ‘C’ running alongside an instance of ‘A’ and ‘B’.

In the figure above we visualize (at a high level) these three service functions and their realizations (as SFEs) in FLAME clusters (each of which represents a different physical location in a city). Each SFE is connected to its dependent SFEs via the FLAME service function routing. Their pattern of deployment is described in a TOSCA template that includes URLs to the VM or container images implementing parts of the overall media service. You’ll use the FLAME orchestrator to coordinate the deployment of your media service and the FLAME Cross Layer Management and Control (CLMC) to aggregate and analyse service performance metrics generated by your software and the FLAME platform itself. At run-time you’ll get the low-down on how your service is performing via the CLMC – rules can also be added to fire automatically when your KPIs breach thresholds and action is required to effect change in the platform.

Experimentation

Next you will start building knowledge through experimental testing. Want to find out how your service theoretically stands up against a sudden surge of thousands of requests? Most likely you’ll have more specific questions relating to your processing activities as well. Perhaps you want to know what the minimal compute resource requirements are to sustain ‘X’ parallel media transcodes at ‘Y’ frames per second? The FLAME platform has been designed from the ground up to facilitate the instrumentation and analysis of service function resource usage and performance. When you ‘plug in’ your service functions as FLAME-ready VMs, you get a near real-time view of how they are performing through use of the CLMC experimentation service.
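An experiment of the "sudden surge" kind might, in its simplest form, look like the following sketch. This is not the FLAME experimentation tooling: it drives a stubbed, stand-in service function with concurrent simulated requests purely to show the shape of such a test.

```python
# Hypothetical experiment sketch: drive a stubbed service function with a
# surge of concurrent requests and measure per-request latency.
# This is illustrative only, not the FLAME experimentation service.
import random
import time
from concurrent.futures import ThreadPoolExecutor

def stub_transcode(frame_count: int) -> float:
    """Stand-in for a media SF: simulate transcoding work and
    return the elapsed time in seconds."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))  # simulated processing time
    return time.perf_counter() - start

def surge(n_requests: int, workers: int) -> list[float]:
    """Fire n_requests concurrent calls at the stub SF."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(stub_transcode, [25] * n_requests))

latencies = surge(n_requests=200, workers=20)
p95 = sorted(latencies)[int(0.95 * len(latencies))]
print(f"p95 latency across the surge: {p95:.4f}s")
```

In a real FLAME experiment the simulated load would target your deployed SFEs and the latency data would flow into the CLMC rather than a local list.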

The FLAME CLMC offers experimenters a perspective on a dynamically changing media service platform that is continuously responding to user demand. Through the CLMC you’ll have the ability to define and monitor KPIs that are relevant to your service. At run-time, you can ‘slice and dice’ the data you collect using FLAME’s powerful media service information model and graph-based analysis support. You can ask common questions such as: ‘what is the average response time of my web servers at the edge?’ But you can get much deeper. For example: want to know the average round-trip time for request payloads over 500k reaching servers running at more than 75% of their RAM capacity? Yep, you can do that.
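To make that last query concrete, here is a plain-Python sketch of the same ‘slice and dice’ over a handful of invented metric records. The real CLMC uses its own information model and query interface; the record fields and sample values below are made up for illustration.

```python
# Illustrative 'slice and dice' over collected metric records.
# The CLMC has its own information model and query interface; this is
# plain Python over invented sample data.
records = [
    {"payload_bytes": 600_000, "ram_percent": 80, "rtt_ms": 120},
    {"payload_bytes": 600_000, "ram_percent": 50, "rtt_ms": 60},
    {"payload_bytes": 100_000, "ram_percent": 90, "rtt_ms": 30},
    {"payload_bytes": 750_000, "ram_percent": 78, "rtt_ms": 140},
]

# "Round-trip time for payloads over 500k on servers above 75% RAM usage"
matching = [
    r["rtt_ms"] for r in records
    if r["payload_bytes"] > 500_000 and r["ram_percent"] > 75
]
avg_rtt = sum(matching) / len(matching)
print(f"average RTT for large payloads on memory-pressured servers: {avg_rtt} ms")
```

The value of the information model is that such cross-cutting filters (payload size × host resource state × network timing) are expressible without you having to join the data by hand.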

At the end of your platform experiments it’s likely you’ll have collected quite a lot of data – sufficient to analyse and start drawing tentative hypotheses on the factors that impact quality of service (QoS) under specific deployment configurations and (simulated) levels of demand. As a result you may tweak your service function configuration with a view to meeting a particular demand or cost baseline. You might even specify some initial rules that define thresholds to trigger the addition or removal of SF resources in response to user behaviours.

Suddenly, you’re ready to test your innovation in the wild!

Real-world trials (small scale)

So what is it really like to take part in this digital media experience you have so carefully crafted and tested? There is only one way to find out: get it out there in the real world and see what happens when people use it. You’ll begin with a small trial group, perhaps initially testing only limited aspects of your UX. The metric collection you used previously through experimentation can be re-applied here. In addition, since you’re working with small numbers of participants, you have the opportunity to collect detailed, qualitative data on individuals’ perceived quality of experience (QoE). This is an excellent time to test your hypothesized mapping from QoS to QoE. Did real-life demand and service performance align with corresponding levels in your experimentation? How far can you push resources down before UX becomes unacceptable? Are the subjective reports of UX in line with your expectations? At what point does increasing the resources available to your service function(s) no longer result in a cost-effective improvement of UX?

Urban scale trials

Your small-scale trials have led to a deeper understanding of how well your digital media service meets real-world demand. Moreover, through the analysis of your qualitative data and the corresponding changes in QoS metrics, you have generated knowledge that empowers you to propose, and then trial on the FLAME platform, a configuration that should sustain a high quality UX at urban scale – for real. For example, perhaps you’ve identified that a major bottleneck impacting overall service latency is disk I/O in the cloud. You conclude that under certain usage conditions, rapidly caching selected content in and out of in-memory locations on FLAME edge compute devices reduces latency. But does it work at scale in the real world? Did that change in the way you deliver that unique UX using FLAME really engage more users while reducing your network and storage costs?

The FLAME platform has been designed from the start not only to answer this sort of question at urban scale, but also to provide the mechanisms to deliver solutions. Using FLAME, when you evaluate you’ll see the platform dynamically respond to demand for your service through behaviours such as network-level indirection, HTTP multicast response delivery and efficient routing to dynamic content caches. The knowledge you have created to optimize user experience will be played out and validated at scale and in a real-world, city context: a truly unique and valuable outcome. It’s a big win for you – and also others in the FLAME stakeholder ecosystem. Investors in your media service get more confidence in your offering and an understanding of the resources needed for it to succeed. City infrastructure and community stakeholders are offered insights into how the city and its citizens can be enriched. Your win is also a win for the FLAME platform and its proposition as a vision of the future media internet.


Post from Simon Crowle, IT Innovation