Journal of System Simulation
Abstract
Tracked robots with flippers have a degree of terrain adaptation capability. To improve the intelligence of robot operation in complex environments, it is important to realize autonomous control of the flippers. Combining expert experience in obstacle crossing with optimization objectives, the flipper control problem is modeled as a Markov decision process (MDP), and a simulation training environment is built on the physics engine Pymunk. A deep reinforcement learning control algorithm based on the dueling double DQN (D3QN) network is proposed for flipper control. Taking terrain information and the robot state as input and the angles of the four flippers as output, the algorithm achieves self-learning flipper control in challenging terrain. The learned flipper control policy is compared with manual operation in the Gazebo 3D simulation environment. The results show that the proposed algorithm enables the robot's flippers to adjust adaptively, helping the robot traverse complex terrain more efficiently.
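The abstract names two standard value-based ideas behind D3QN: the dueling decomposition Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a'), and the double-DQN target, in which the online network selects the next action while the target network evaluates it. As a minimal sketch of those two mechanisms only (the paper's network architecture, state encoding, and training loop are not reproduced here; all names are illustrative):

```python
def dueling_q(value, advantages):
    """Dueling aggregation: combine state value V(s) and per-action
    advantages A(s, a) into Q-values, subtracting the mean advantage
    so the decomposition is identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN bootstrap target: the online network picks the
    greedy next action; the target network supplies its value,
    which reduces Q-value overestimation."""
    if done:
        return reward
    best_action = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best_action]
```

For example, `dueling_q(1.0, [0.5, -0.5])` yields `[1.5, 0.5]`: the zero-mean advantages shift the shared state value per action. In the paper's setting the action space would correspond to discrete adjustments of the four flipper angles.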
Recommended Citation
Pan, Hainan; Chen, Bailiang; Huang, Kaihong; Ren, Junkai; Cheng, Chuang; Lu, Huimin; and Zhang, Hui (2024) "Flipper Control Method for Tracked Robot Based on Deep Reinforcement Learning," Journal of System Simulation: Vol. 36: Iss. 2, Article 10.
DOI: 10.16182/j.issn1004731x.joss.22-1105
Available at: https://dc-china-simulation.researchcommons.org/journal/vol36/iss2/10
First Page: 405
Last Page: 414
CLC: TP242.6
Included in: Artificial Intelligence and Robotics Commons; Computer Engineering Commons; Numerical Analysis and Scientific Computing Commons; Operations Research, Systems Engineering and Industrial Engineering Commons; Systems Science Commons