Journal of System Simulation

Abstract

To realize multi-agent intelligent planning and target tracking in complex naval battlefield environments, this work takes submarines and warships as agents and proposes a simulation method based on the Deep Q Network (DQN) reinforcement learning algorithm. Two neural networks with the same structure but different parameters are designed to update the real and predicted Q values, ensuring the convergence of the value function. An action selection mechanism is designed based on the ε-greedy algorithm, and a reward function tailored to the naval battlefield environment is designed to increase the update speed and generalization ability of Learning with Experience Replay (LER). Simulation results show that, compared with existing single-agent and multi-agent path planning algorithms, each agent can effectively avoid obstacles in unfamiliar environments and achieve more intelligent path planning and target tracking after a certain number of learning steps.
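For illustration, the following is a minimal sketch (not the authors' code) of the DQN components described above: two networks with identical structure but separate parameters, ε-greedy action selection, and a replay buffer. The state/action dimensions, network sizes, and hyperparameters are assumptions for the example, not values from the paper.

```python
# Minimal DQN sketch: online ("predicted Q") and target ("real Q") networks,
# epsilon-greedy action selection, and experience replay. All dimensions and
# hyperparameters below are illustrative assumptions.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 8          # assumed encoding of the battlefield state and moves
GAMMA, EPSILON, BATCH = 0.9, 0.1, 32

def make_net():
    # Same structure is used for both networks; only the parameters differ.
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

online_net, target_net = make_net(), make_net()
target_net.load_state_dict(online_net.state_dict())   # sync target parameters periodically
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                          # experience replay buffer of (s, a, r, s', done)

def select_action(state):
    # Epsilon-greedy: explore with probability EPSILON, otherwise exploit argmax Q.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return online_net(torch.as_tensor(state, dtype=torch.float32)).argmax().item()

def train_step():
    # Sample a minibatch from the replay buffer and fit the predicted Q values
    # to targets computed with the separate target network.
    if len(replay) < BATCH:
        return
    s, a, r, s2, done = map(torch.as_tensor, zip(*random.sample(replay, BATCH)))
    q_pred = online_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_target = r.float() + GAMMA * target_net(s2.float()).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this kind of setup, the reward function and the obstacle/target encoding of the naval battlefield would replace the placeholder state and action dimensions; the two-network arrangement and replay buffer are what stabilize the value-function updates the abstract refers to.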

First Page

2440

Revised Date

2021-04-15

Last Page

2448

CLC

TP311.5;TP391

Recommended Citation

Huang Xiaodong, Yuan Haitao, Bi Jing, Liu Tao. DQN-based Path Planning Method and Simulation for Submarine and Warship in Naval Battlefield[J]. Journal of System Simulation, 2021, 33(10): 2440-2448.

Corresponding Author

Bi Jing

DOI

10.16182/j.issn1004731x.joss.21-0229
