Journal of System Simulation
Abstract
To realize multi-agent intelligent planning and target tracking in a complex naval battlefield environment, this work focuses on agents (submarines or warships) and proposes a simulation method based on a reinforcement learning algorithm, Deep Q Network (DQN). Two neural networks with the same structure but different parameters are designed to update the predicted and target Q values, ensuring convergence of the value function. An ε-greedy action selection mechanism is designed, and a reward function is tailored to the naval battlefield environment to increase the update speed and generalization ability of learning with experience replay. Simulation results show that, compared with existing single-agent and multi-agent path planning algorithms, each agent can effectively avoid obstacles in unfamiliar environments and, after a certain number of learning steps, achieve more intelligent path planning and target tracking.
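The three mechanisms the abstract names — a second "target" network with frozen parameters, ε-greedy action selection, and experience replay — can be sketched roughly as follows. This is a minimal NumPy illustration of the standard DQN building blocks, not the authors' implementation; the function names, buffer design, and parameters are all assumptions for illustration.

```python
import random
from collections import deque

import numpy as np


def epsilon_greedy(q_values, epsilon):
    """Explore with probability epsilon, otherwise act greedily on Q-values."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(np.argmax(q_values))


def dqn_target(reward, next_q_target, gamma, done):
    """Bellman target r + gamma * max_a' Q_target(s', a').

    next_q_target comes from the frozen target network, which is only
    synchronized with the online network every N steps; this decoupling of
    predicted and target Q values is what stabilizes convergence.
    """
    if done:
        return reward
    return reward + gamma * float(np.max(next_q_target))


class ReplayBuffer:
    """Fixed-capacity store of past transitions for experience replay."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buf.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of consecutive
        # battlefield states, improving generalization of the updates.
        return random.sample(self.buf, batch_size)

    def __len__(self):
        return len(self.buf)
```

In a training loop, the agent would pick actions with `epsilon_greedy` (decaying ε over episodes), push each transition into the buffer, and regress the online network's prediction toward `dqn_target` on sampled minibatches.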
Recommended Citation
Huang, Xiaodong; Yuan, Haitao; Bi, Jing; and Liu, Tao (2021) "DQN-based Path Planning Method and Simulation for Submarine and Warship in Naval Battlefield," Journal of System Simulation: Vol. 33: Iss. 10, Article 17.
DOI: 10.16182/j.issn1004731x.joss.21-0229
Available at: https://dc-china-simulation.researchcommons.org/journal/vol33/iss10/17
First Page
2440
Revised Date
2021-04-15
DOI Link
https://doi.org/10.16182/j.issn1004731x.joss.21-0229
Last Page
2448
CLC
TP311.5;TP391
Recommended Citation
Huang Xiaodong, Yuan Haitao, Bi Jing, Liu Tao. DQN-based Path Planning Method and Simulation for Submarine and Warship in Naval Battlefield[J]. Journal of System Simulation, 2021, 33(10): 2440-2448.
Included in
Artificial Intelligence and Robotics Commons, Computer Engineering Commons, Numerical Analysis and Scientific Computing Commons, Operations Research, Systems Engineering and Industrial Engineering Commons, Systems Science Commons