Reinforcement Learning Based Orchestration for Elastic Services
Due to the highly variable execution contexts in which edge services run, adapting their behavior to the execution context is crucial for complying with their requirements. However, adapting service behavior is a challenging task because it is hard to anticipate the execution contexts in which a service will be deployed, as well as to assess the impact that each behavior change will produce. To provide this adaptation efficiently, we propose a Reinforcement Learning (RL) based Orchestration for Elastic Services. We implement and evaluate this approach by adapting an elastic service in different simulated execution contexts and comparing its performance to a heuristics-based approach. We show that elastic services achieve high precision and requirement satisfaction rates while creating an overhead of less than 0.5% to the overall service. In particular, the RL approach proves to be more efficient than its rule-based counterpart, yielding a 10 to 25% higher precision while being 25% more efficient.
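The abstract does not specify the learning algorithm, so the following is only a minimal sketch of the general idea: a tabular Q-learning agent that picks an elastic service behavior for the observed execution context and updates its value estimates from the measured outcome. The context labels, behavior names, hyperparameters, and reward used here are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch (assumed, not the paper's implementation): a tabular
# Q-learning orchestrator choosing a service behavior per execution context.
import random
from collections import defaultdict

# Hypothetical discretized execution contexts and elastic behaviors.
CONTEXTS = ["low_load", "medium_load", "high_load"]
BEHAVIORS = ["full_quality", "balanced", "lightweight"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration
q_table = defaultdict(float)  # (context, behavior) -> estimated value

def choose_behavior(context):
    """Epsilon-greedy selection: mostly exploit the best-known behavior."""
    if random.random() < EPSILON:
        return random.choice(BEHAVIORS)
    return max(BEHAVIORS, key=lambda b: q_table[(context, b)])

def update(context, behavior, reward, next_context):
    """Q-learning update after observing the chosen behavior's impact."""
    best_next = max(q_table[(next_context, b)] for b in BEHAVIORS)
    key = (context, behavior)
    q_table[key] += ALPHA * (reward + GAMMA * best_next - q_table[key])

# One orchestration step: observe context, act, measure outcome, learn.
ctx = "high_load"
behavior = choose_behavior(ctx)
# In practice the reward would come from monitoring, e.g. requirement
# satisfaction (latency, precision) minus the adaptation's overhead.
update(ctx, behavior, reward=1.0, next_context="medium_load")
```

In a sketch like this, the reward signal is where requirement satisfaction would be encoded; epsilon-greedy exploration trades trying new behaviors against exploiting ones already known to perform well in a given context.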