Journal of System Simulation
Abstract
To address the offloading and execution of delay-constrained computing tasks in the internet of vehicles under edge computing, a task scheduling method based on deep reinforcement learning is proposed. In a multi-edge-server scenario, a software-defined-network-aided internet of vehicles task offloading system is built, and on this basis a task scheduling model for vehicle computation offloading is given. According to the characteristics of task scheduling, a scheduling method based on an improved pointer network is designed. Considering the complexity of joint task scheduling and computing resource allocation, a deep reinforcement learning algorithm is used to train the pointer network, and the trained network then schedules the vehicles' offloaded tasks. Simulation results show that, with the same edge-server computing resources, the proposed method processes more delay-constrained computing tasks than the compared methods and effectively improves the service capability of the internet of vehicles task offloading system.
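The abstract does not give implementation details, but the core idea of pointer-network scheduling can be illustrated with a minimal greedy-decoding sketch. Everything below is an assumption for illustration only (the function name, the weight shapes, and the additive-attention form are not taken from the paper, whose improved pointer network and DRL training procedure are described in the full text):

```python
import numpy as np

def pointer_schedule(task_feats, W_q, W_k, v):
    """Greedy pointer-network-style decoding: at each step, attention
    scores over the not-yet-scheduled tasks pick the next task to run.

    task_feats: (n, d) array of per-task features (e.g. data size,
    required cycles, deadline). W_q: (d, h), W_k: (d, h), v: (h,).
    Returns a scheduling order as a list of task indices.
    """
    n, _ = task_feats.shape
    keys = task_feats @ W_k                  # (n, h) encoded tasks
    query = task_feats.mean(axis=0) @ W_q    # simple initial context
    order, mask = [], np.zeros(n, dtype=bool)
    for _ in range(n):
        scores = np.tanh(keys + query) @ v   # additive attention scores
        scores[mask] = -np.inf               # mask already-scheduled tasks
        idx = int(np.argmax(scores))         # greedy "pointer" choice
        order.append(idx)
        mask[idx] = True
        query = keys[idx]                    # last choice becomes context
    return order

rng = np.random.default_rng(0)
feats = rng.random((5, 4))
order = pointer_schedule(feats, rng.random((4, 8)), rng.random((4, 8)),
                         rng.random(8))
```

In the paper's setting, the greedy argmax would be replaced by sampling during DRL training, with the reward tied to how many delay-constrained tasks meet their deadlines.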
Recommended Citation
Ju, Xiang; Su, Shengchao; Xu, Chaojie; and He, Beibei (2023) "Task Scheduling for Internet of Vehicles Based on Deep Reinforcement Learning in Edge Computing," Journal of System Simulation: Vol. 35: Iss. 12, Article 5.
DOI: 10.16182/j.issn1004731x.joss.22-0841
Available at: https://dc-china-simulation.researchcommons.org/journal/vol35/iss12/5
First Page: 2550
Last Page: 2559
CLC: TP391.9
Included in
Artificial Intelligence and Robotics Commons, Computer Engineering Commons, Numerical Analysis and Scientific Computing Commons, Operations Research, Systems Engineering and Industrial Engineering Commons, Systems Science Commons