
Journal of System Simulation

Abstract

Abstract: To address the intercell scheduling problem in which machining tasks arrive dynamically, and to achieve adaptive scheduling in the complex and changeable environment of an intelligent factory, a scheduling method based on a deep Q network is proposed. A complex network is constructed with cells as nodes and the intercell machining paths of workpieces as directed edges, and the degree value is introduced to define a state space that captures intercell scheduling characteristics. A compound scheduling rule composed of a workpiece layer, a cell layer, and a machine layer is designed, so that hierarchical optimization makes the scheduling scheme more global. Because the double deep Q network (DDQN) still selects sub-optimal actions in the later stage of training, an exploration strategy based on an exponential function is proposed. Simulation experiments at different scales verify that the proposed method can cope with a changeable dynamic environment and quickly generate an optimal scheduling scheme.
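The abstract mentions an exploration strategy based on an exponential function to reduce the selection of sub-optimal actions late in DDQN training, but does not give the exact functional form. The sketch below is only an illustration of one common exponential-decay schedule under assumed parameters; the names exponential_epsilon, select_action, eps_start, eps_end, and decay are hypothetical and not taken from the paper.

```python
import math
import random

def exponential_epsilon(step, eps_start=1.0, eps_end=0.01, decay=2000.0):
    """Illustrative exponentially decaying exploration rate (assumed form)."""
    return eps_end + (eps_start - eps_end) * math.exp(-step / decay)

def select_action(q_values, step):
    """Epsilon-greedy selection with the exponential schedule above.

    q_values: list of Q-value estimates, one per candidate compound
    scheduling rule (placeholder for the DDQN network output).
    """
    if random.random() < exponential_epsilon(step):
        return random.randrange(len(q_values))                        # explore
    return max(range(len(q_values)), key=q_values.__getitem__)        # exploit
```

Under such a schedule the exploration rate stays high early in training and approaches eps_end as the step count grows, which is one way to limit late-stage random (potentially sub-optimal) action choices.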

First Page

2345

Last Page

2358

CLC

TP18; TP391

Recommended Citation

Ni Jing, Ma Mengke. Intercell Dynamic Scheduling Method Based on Deep Reinforcement Learning [J]. Journal of System Simulation, 2023, 35(11): 2345-2358.

DOI

10.16182/j.issn1004731x.joss.22-0666
