Journal of System Simulation
Abstract
To solve the intercell scheduling problem with dynamically arriving machining tasks and to achieve adaptive scheduling in the complex, changeable environment of an intelligent factory, a scheduling method based on a deep Q-network is proposed. A complex network is constructed with cells as nodes and workpiece intercell machining paths as directed edges, and node degree values are introduced to define a state space that captures intercell scheduling characteristics. A compound scheduling rule composed of a workpiece layer, a cell layer, and a machine layer is designed; this hierarchical optimization makes the scheduling scheme more global. Because a double deep Q-network (DDQN) still selects sub-optimal actions in the later stages of training, an exploration strategy based on an exponential function is proposed. Simulation experiments at different scales verify that the proposed method can cope with a changeable dynamic environment and quickly generate an optimal scheduling scheme.
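The exponential-function exploration strategy mentioned above can be illustrated by one common form of such a schedule: an epsilon-greedy rule whose exploration rate decays exponentially with the training step, so the agent explores heavily early on and acts near-greedily later. The abstract does not give the paper's exact formula, so the function shape and all parameter values below are assumptions for illustration only.

```python
import math
import random

def epsilon(step, eps_start=1.0, eps_end=0.05, decay=2000.0):
    """Exponentially decaying exploration rate (hypothetical parameters):
    starts at eps_start and approaches eps_end as the step count grows."""
    return eps_end + (eps_start - eps_end) * math.exp(-step / decay)

def select_action(q_values, step, rng=random):
    """Epsilon-greedy selection over a list of Q-values: with probability
    epsilon(step) pick a random action (explore), otherwise the argmax."""
    if rng.random() < epsilon(step):
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)
```

With such a schedule the agent's late-training behavior is dominated by the learned Q-values, which addresses the sub-optimal action selection the abstract attributes to plain DDQN in later training stages.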
Recommended Citation
Ni, Jing and Ma, Mengke (2023) "Intercell Dynamic Scheduling Method Based on Deep Reinforcement Learning," Journal of System Simulation: Vol. 35: Iss. 11, Article 5.
DOI: 10.16182/j.issn1004731x.joss.22-0666
Available at: https://dc-china-simulation.researchcommons.org/journal/vol35/iss11/5
First Page
2345
Last Page
2358
CLC
TP18; TP391
Included in
Artificial Intelligence and Robotics Commons, Computer Engineering Commons, Numerical Analysis and Scientific Computing Commons, Operations Research, Systems Engineering and Industrial Engineering Commons, Systems Science Commons