
Journal of System Simulation

Abstract

Tracked robots with flippers have a degree of terrain adaptability. To raise the level of intelligent operation of such robots in complex environments, it is important to realize autonomous flipper control. Combining expert obstacle-crossing experience with optimization indicators, the robot's flipper control problem is modeled as a Markov decision process (MDP), and a simulation training environment is built on the physics simulation engine Pymunk. A deep reinforcement learning control algorithm based on the dueling double DQN (D3QN) network is proposed to control the flippers. Taking terrain information and the robot state as input and the angles of the four flippers as output, the algorithm achieves self-learning flipper control over challenging terrain. The learned flipper control policy is compared with manual operation in the Gazebo 3D simulation environment. The results show that the proposed algorithm enables the robot's flippers to adjust adaptively, helping the robot traverse complex terrain more efficiently.
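The D3QN network named in the abstract combines two standard DQN refinements: a dueling architecture that splits the Q-function into a state value and per-action advantages, and double Q-learning, where the online network selects the next action and the target network evaluates it. A minimal NumPy sketch of both ideas, using toy Q-values (the function names and numbers are illustrative, not from the paper):

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN target: online net picks the action, target net scores it."""
    best_a = np.argmax(q_online_next, axis=-1)                       # action selection
    q_eval = np.take_along_axis(q_target_next,
                                best_a[..., None], axis=-1).squeeze(-1)  # evaluation
    return reward + gamma * (1.0 - done) * q_eval

# Toy example: batch of 2 states, 3 discrete flipper-angle actions.
v = np.array([[1.0], [0.5]])                       # state values V(s)
a = np.array([[0.2, 0.4, 0.0], [0.1, 0.1, 0.4]])   # advantages A(s,a)
q = dueling_q(v, a)                                # combined Q-values

r = np.array([1.0, 0.0])                           # rewards
q_on = np.array([[0.3, 0.9, 0.1], [0.2, 0.2, 0.5]])  # online net Q(s', .)
q_tg = np.array([[0.4, 0.7, 0.2], [0.3, 0.1, 0.6]])  # target net Q(s', .)
y = double_dqn_target(r, 0.99, q_on, q_tg, done=np.array([0.0, 1.0]))
```

In practice the two streams are heads of one neural network fed by the terrain and robot-state observation; the sketch only shows the arithmetic that distinguishes D3QN from a vanilla DQN.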

First Page

405

Last Page

414

CLC

TP242.6

Recommended Citation

Pan Hainan, Chen Bailiang, Huang Kaihong, et al. Flipper Control Method for Tracked Robot Based on Deep Reinforcement Learning[J]. Journal of System Simulation, 2024, 36(2): 405-414.

Corresponding Author

Huang Kaihong

DOI

10.16182/j.issn1004731x.joss.22-1105
