Reinforcement Learning-Based Framework for Optimal Task Scheduling in Cloud Computing

KrishnaRao Patwari
Raghvendra Kumar
J.S.V.R.S. Sastry
J.S. Ishaani Priyadarshini
Tran Ha Thanh
Vu Trong Hieu

Abstract

Cloud computing enables the execution of large-scale computing tasks on a pay-per-use basis, allowing users worldwide to submit diverse workloads to cloud infrastructures. In this context, efficient task scheduling is crucial for ensuring quality of service and high resource utilization. Existing cloud task scheduling approaches are largely heuristic or learning-based; heuristic methods, however, lack adaptability to dynamic runtime conditions.


To address these limitations, this paper proposes a reinforcement learning–based framework for optimal task scheduling in cloud computing. The scheduling problem is modeled as a Markov Decision Process (MDP) and solved using a reinforcement learning–based optimal task scheduling algorithm (RL-OTS) that learns scheduling policies through continuous interaction between the decision-making agent and the environment. The proposed framework dynamically selects scheduling actions based on the observed system state, improving scheduling efficiency under varying workload conditions.
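To make the MDP formulation concrete, the sketch below shows a generic tabular Q-learning loop for assigning tasks to virtual machines. It is an illustration of the general technique, not the paper's RL-OTS algorithm: the state encoding (coarse per-VM load buckets), the reward (negative load of the chosen VM), and the toy workload are all assumptions made for this example.

```python
import random

random.seed(0)

NUM_VMS = 3                      # number of virtual machines (actions)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def bucket(load):
    """Discretize a VM's pending load into one of four coarse buckets."""
    return min(int(load // 5), 3)

def state_of(loads):
    """MDP state: tuple of load buckets, one per VM."""
    return tuple(bucket(l) for l in loads)

Q = {}  # (state, action) -> estimated value

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(NUM_VMS)
    return max(range(NUM_VMS), key=lambda a: Q.get((state, a), 0.0))

def train(episodes=200, tasks_per_episode=50):
    for _ in range(episodes):
        loads = [0.0] * NUM_VMS
        for _ in range(tasks_per_episode):
            s = choose_state = state_of(loads)
            a = choose(choose_state)
            size = random.uniform(1.0, 3.0)   # toy task length
            loads[a] += size
            reward = -loads[a]                # later finish time -> lower reward
            s2 = state_of(loads)
            best_next = max(Q.get((s2, a2), 0.0) for a2 in range(NUM_VMS))
            old = Q.get((s, a), 0.0)
            # Standard Q-learning update rule
            Q[(s, a)] = old + ALPHA * (reward + GAMMA * best_next - old)

train()

# Greedy policy for an unbalanced state; with this reward it tends to
# favor lightly loaded VMs (the exact choice depends on the random seed).
s = state_of([0.0, 6.0, 12.0])
best = max(range(NUM_VMS), key=lambda a: Q.get((s, a), 0.0))
print(best)
```

The key design choice this illustrates is the one the abstract describes: the scheduler observes a system state, picks an action, and updates its policy from the observed reward, rather than following a fixed heuristic.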


An empirical evaluation conducted using heterogeneous workloads of 2,000, 5,000, and 10,000 jobs demonstrates that the proposed RL-OTS algorithm consistently outperforms several state-of-the-art scheduling methods, achieving success rates of 81.10%, 80.10%, and 80.20%, respectively. These results highlight the effectiveness of reinforcement learning for adaptive and intelligent task scheduling in cloud computing environments.
