Basic information

Title: Intelligent control of edge compute resources within MEC environments
Acronym: SmartMEC
Company: Modio | www.modio.io – info@modio.io
Trial/Replicator: Trial
Type: SME

Where: Bristol
When (time-plan): June to December 2019 (6 months)

Modio logo
Project summary

Modio is developing a commercial solution for computationally intelligent control of edge compute resources in Mobile Edge Computing (MEC) environments. Our Value Proposition is built around our technical innovation: the ability to cope with large variations in resource demand while avoiding both over-allocation of resources, which leads to excessive operational costs for the service provider, and under-allocation, which leads to violations of the clients’ SLAs for the offered services.

Although the results obtained so far show that our prototype Proof of Concept (PoC) implementation has promising potential for commercial exploitation, additional work is still needed to enhance our PoC with a Deep Reinforcement Learning method tailored to a distributed MEC environment, in which resource-demanding sessions, e.g. WebRTC sessions, can be intelligently served by the most appropriate edge Point of Presence (PoP).

The rationale behind our approach is that MEC can significantly improve the QoS of video streaming services; however, resources at edge nodes are limited and must be allocated in an economically efficient manner. This is both stated in the academic literature and confirmed by our own experiments: when we deployed heavy WebRTC workloads on a single cloud deployment, we observed that the single cloud was unable to serve all workload resource demands.

Within our FLAME experiment, we will tackle the problem identified above: how to allocate compute resources across multiple edge devices within a MEC environment, such as the FLAME offering. Our solution involves the following objectives, which will be pursued within our FLAME experiment.

  • Objective 1. Enhance our existing AI-based resource control methodology for a distributed environment, in order to derive policies dictating where resource-demanding WebRTC sessions should be placed across time and space, so that they are served by the most appropriate edge computing node in a MEC environment. To solve this problem, we will leverage our commercial Qiqbus streaming analytics platform, which we will enhance with the implementation of a suitable Deep Reinforcement Learning method.
  • Objective 2. Emulate a realistic usage scenario of our envisaged commercial solution by using FLAME’s capabilities, such as its CLMC interface and its Media Quality Analysis, Differentiated Services and Localized Traffic platform capabilities, which shall enable us to conduct a true MEC video delivery experiment, where video servers holding the same local content shall be deployed across distinct edge PoPs within a real MEC environment.
  • Objective 3. Experiment over FLAME’s environment within the city of Bristol, using resources from its OpenStack compute nodes, with the goal of iteratively refining and fine-tuning our Deep Reinforcement Learning implementation within Qiqbus, so as finally to be in a position to compile a valuable demonstrator to showcase to VCs.
Qiqbus assisting FLAME to optimally direct the WebRTC client (we also consider DASH clients for more fine-grained decisions) to the most appropriate edge node

We will leverage the FLAME platform and its capabilities for monitoring and governing the behaviour of the multi-PoP edge infrastructure at Bristol. Our commercial software, Qiqbus, shall implement the decision-making mechanisms for service instantiation and placement. It will work in tandem with FLAME by (i) receiving monitoring information from the FLAME CLMC; (ii) using that monitoring information to train a Deep Reinforcement Learning network implemented within Qiqbus; (iii) using the trained network to derive policies that select a near-optimal computing node for each new WebRTC client while minimizing the mean delivery time of its associated workload; and (iv) leveraging FLAME’s platform capabilities to enact each policy by directing the corresponding WebRTC client to the edge PoP dictated by our engine.
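For illustration only, the learn-and-place loop described above can be sketched as a tabular Q-learning agent. The real engine is a Deep Reinforcement Learning network inside Qiqbus, and the PoP names, load metrics and reward shaping below are invented placeholders rather than actual Qiqbus or FLAME CLMC interfaces:

```python
import random
from collections import defaultdict

class PlacementAgent:
    """Toy epsilon-greedy Q-learning agent that picks an edge PoP for each
    new WebRTC session. The state is a coarse per-PoP load level and the
    reward is the negative delivery time, so minimizing mean delivery time
    corresponds to maximizing reward."""

    def __init__(self, pops, epsilon=0.1, alpha=0.5):
        self.pops = pops              # candidate edge PoPs
        self.epsilon = epsilon        # exploration rate
        self.alpha = alpha            # learning rate
        self.q = defaultdict(float)   # Q[(state, pop)] -> expected reward

    def _state(self, loads):
        # Discretize load metrics in [0, 1] into low/medium/high buckets.
        return tuple(min(int(loads[p] * 3), 2) for p in self.pops)

    def choose(self, loads):
        """Pick a PoP for a new session under the current policy."""
        state = self._state(loads)
        if random.random() < self.epsilon:
            return random.choice(self.pops)                      # explore
        return max(self.pops, key=lambda p: self.q[(state, p)])  # exploit

    def update(self, loads, pop, delivery_time):
        """Learn from the delivery time observed after a placement."""
        state = self._state(loads)
        reward = -delivery_time
        self.q[(state, pop)] += self.alpha * (reward - self.q[(state, pop)])

# Simulated run: the heavily loaded PoP yields longer delivery times.
random.seed(0)
agent = PlacementAgent(["pop-a", "pop-b"], epsilon=0.2)
loads = {"pop-a": 0.9, "pop-b": 0.1}   # placeholder CLMC-style metrics
for _ in range(200):
    pop = agent.choose(loads)
    agent.update(loads, pop, delivery_time=50 + 100 * loads[pop])
agent.epsilon = 0.0  # act greedily: the learned choice is the lighter PoP
```

A production version would replace the lookup table with a neural function approximator, so that high-dimensional, continuous CLMC metrics can be used as state without manual discretization.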

This experiment shall allow us to move from a Proof of Concept of an intelligent resource control implementation, tailored to a single cloud, to a Demonstrator of an intelligent resource control implementation applicable to a MEC environment, which will be validated with real end users in the city of Bristol, as depicted in the figure below.

A snapshot of our validation scenario in our planned small trial at Bristol (depicting that our solution may direct the user to receive a video from a computing node not collocated with the AP with the strongest signal)