Reinforcement-Learning-based Foresighted Task Scheduling in Cloud Computing
With the emergence of cloud computing, users obtain computing resources from cloud service providers on a pay-as-you-go basis. Finding an optimized scheduling approach that maps all tasks to resources is an essential problem, because the resources available for requests are limited and their availability varies dynamically over time; solving it well can improve the overall efficiency of the system. Many cloud scheduling methods exist, targeting parameters such as response time, makespan, waiting time, energy consumption, cost, utilization rate, and load balancing. However, many of these methods are not suitable for improving scheduling performance when user requests change over time. This thesis therefore proposes a scheduling method based on reinforcement learning. By adapting to environmental conditions and responding to unsteady requests, reinforcement learning can yield a long-term increase in system performance. The results show that the proposed method not only reduces response time and makespan but also increases resource utilization as a secondary goal. The proposed method shows a 49.52% improvement in response time over the Q-sch algorithm.
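
As an illustration of the general approach (a minimal sketch, not the thesis's exact formulation), the code below shows a tabular Q-learning scheduler that assigns incoming tasks to virtual machines. The state encoding, reward signal, hyperparameters, and the simple queue simulation are assumptions made for this example only.

# Minimal sketch of a tabular Q-learning task scheduler (illustrative only;
# the state/action encoding, reward, and hyperparameters are assumptions,
# not the thesis's exact method).
import random
from collections import defaultdict

class QScheduler:
    def __init__(self, num_vms, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.num_vms = num_vms
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor (captures long-term performance)
        self.epsilon = epsilon  # exploration rate
        # Q[state][action] -> expected long-term value of assigning a task to a VM
        self.q = defaultdict(lambda: [0.0] * num_vms)

    def choose_vm(self, state):
        """Epsilon-greedy choice of the target VM for the incoming task."""
        if random.random() < self.epsilon:
            return random.randrange(self.num_vms)
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update after observing the task's outcome."""
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])


def vm_state(queue_lengths):
    """Discretize VM queue lengths into a hashable state (an assumed encoding)."""
    return tuple(min(q, 5) for q in queue_lengths)  # cap values to keep the table small


if __name__ == "__main__":
    random.seed(0)
    scheduler = QScheduler(num_vms=3)
    queues = [0, 0, 0]
    for _ in range(1000):
        state = vm_state(queues)
        vm = scheduler.choose_vm(state)
        queues[vm] += 1
        # Reward: shorter queues mean lower expected response time (assumed signal).
        reward = -queues[vm]
        # Simulate each VM finishing roughly one task per step.
        queues = [max(0, q - 1) for q in queues]
        scheduler.update(state, vm, reward, vm_state(queues))
    print("Learned Q-values for the idle state:", scheduler.q[(0, 0, 0)])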