Journal of System Simulation

Abstract

To address the high coupling of task interactions and the many influencing factors in task analysis, a task analysis method based on sequence decoupling and deep reinforcement learning (DRL) is proposed, which achieves task decomposition and task sequence reconstruction under complex constraints. The method designs a DRL environment based on task information interaction and improves the SumTree algorithm by using the difference between the loss values of the target network and the evaluation network to evaluate priorities among tasks. An activation function mechanism is introduced into the DRL network to extract task features; a greedy activation factor is proposed to optimize the parameters of the deep neural network, determine the optimal state of the agent, and drive its state transitions. The multi-objective task execution sequence diagram is generated through experience replay. Simulation results show that the method can generate executable task diagrams under optimal scheduling and adapts better to dynamic scenarios than to static ones, indicating broad prospects for application in domain task planning.
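
The abstract describes the prioritized sampling idea only at a high level. As a rough illustration (not the authors' implementation; the names SumTree, td_priority, and the |target − evaluation| priority rule below are assumptions made for this sketch), the following Python snippet shows how a SumTree can rank stored transitions by the discrepancy between target-network and evaluation-network estimates, so that high-discrepancy task transitions are replayed more often.

```python
# Hypothetical sketch: SumTree-backed prioritized replay where a transition's
# priority is the absolute gap between the target-network and evaluation-network
# estimates (a stand-in for the loss difference mentioned in the abstract).
import numpy as np

class SumTree:
    """Binary tree whose leaves hold priorities; internal nodes hold subtree sums."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = np.zeros(2 * capacity - 1)
        self.data = [None] * capacity
        self.write = 0          # next leaf slot to overwrite
        self.size = 0

    def add(self, priority, transition):
        leaf = self.write + self.capacity - 1
        self.data[self.write] = transition
        self.update(leaf, priority)
        self.write = (self.write + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def update(self, leaf, priority):
        change = priority - self.tree[leaf]
        self.tree[leaf] = priority
        while leaf != 0:         # propagate the change up to the root
            leaf = (leaf - 1) // 2
            self.tree[leaf] += change

    def sample(self, value):
        """Walk down the tree; larger-priority leaves are hit more often."""
        idx = 0
        while 2 * idx + 1 < len(self.tree):
            left, right = 2 * idx + 1, 2 * idx + 2
            if value <= self.tree[left]:
                idx = left
            else:
                value -= self.tree[left]
                idx = right
        return idx, self.tree[idx], self.data[idx - self.capacity + 1]

def td_priority(q_eval, q_target, eps=1e-3):
    """Priority from the evaluation/target discrepancy; eps keeps every transition sampleable."""
    return abs(q_target - q_eval) + eps

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tree = SumTree(capacity=8)
    for t in range(8):
        q_eval, q_target = rng.normal(), rng.normal()
        tree.add(td_priority(q_eval, q_target), {"task_id": t})
    # Draw a transition proportionally to its priority for a replay update.
    idx, p, transition = tree.sample(rng.uniform(0, tree.tree[0]))
    print(transition, p)
```

In this reading, transitions whose evaluation-network estimate deviates most from the target network are assigned the largest priorities, which is one plausible way to realize the priority evaluation among tasks described above.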

First Page

1670

Last Page

1681

CLC

E917; TP391

Recommended Citation

Gong Xue, Peng Pengfei, Rong Li, et al. Task Analysis Methods Based on Deep Reinforcement Learning[J]. Journal of System Simulation, 2024, 36(7): 1670-1681.

Corresponding Author

Rong Li

DOI

10.16182/j.issn1004731x.joss.23-0443
