Journal of System Simulation

Abstract

To enhance the efficiency of flexible job shop scheduling, this paper formulates the scheduling problem as a Markov decision process with problem-specific constraints and proposes a cooperative multi-agent reinforcement learning method for the concurrent selection of workpieces and machines. In constructing the Markov decision process, a disjunctive graph is introduced to represent the state features, and two agents are introduced to select the workpieces and machines, respectively. The reward governing the entire scheduling process is defined by predicting changes in the makespan (maximum completion time) between successive decision points. A GIN (graph isomorphism network) is embedded in the solution procedure to extract the relevant state information, and encoder and decoder components are set for the workpiece agent and the machine agent, respectively, to output the two action policies. The PPO (proximal policy optimization) algorithm and the D3QN algorithm are used to train the decision network parameters of the two agents. With hyperparameters determined by the orthogonal experiment method, the proposed method is compared on standard benchmarks against methods from the existing literature. The results demonstrate that the proposed method is significantly superior in solving the flexible job shop scheduling problem, further substantiating its feasibility and effectiveness.
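The GIN embedding mentioned in the abstract aggregates each disjunctive-graph node's features with those of its neighbors. A minimal sketch of one GIN aggregation step is shown below; it follows the standard GIN update h_v ← φ((1 + ε)·h_v + Σ_{u∈N(v)} h_u), with the learnable MLP φ replaced by the identity for clarity. All names and the toy graph are illustrative, not taken from the paper.

```python
# Minimal sketch of one GIN (graph isomorphism network) aggregation step,
# of the kind used to embed disjunctive-graph scheduling states.
# The learnable MLP is replaced by the identity here; eps is the GIN
# self-weighting parameter (often learned, set to 0.0 in this sketch).

def gin_layer(features, adjacency, eps=0.0):
    """One GIN update: h_v <- (1 + eps) * h_v + sum of neighbor features."""
    new_features = []
    for v, h_v in enumerate(features):
        agg = [(1.0 + eps) * x for x in h_v]          # weighted self term
        for u in adjacency[v]:                        # sum over neighbors N(v)
            agg = [a + b for a, b in zip(agg, features[u])]
        new_features.append(agg)
    return new_features

# Toy disjunctive-graph fragment: 3 operation nodes in a processing chain.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = [[1], [0, 2], [1]]                              # undirected neighbor lists
print(gin_layer(feats, adj))  # → [[1.0, 1.0], [2.0, 2.0], [1.0, 2.0]]
```

In the full method, stacking several such layers (each followed by an MLP) would yield node embeddings that the workpiece and machine agents' encoder/decoder networks consume when outputting their action policies.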

First Page

2699

Last Page

2711

CLC

TP278

Recommended Citation

Li Jian, Li Huankun, He Pengbo, et al. Flexible Job Shop Scheduling Method Based on Collaborative Agent Reinforcement Learning Algorithm[J]. Journal of System Simulation, 2024, 36(11): 2699-2711.

DOI

10.16182/j.issn1004731x.joss.23-0978
