Journal of System Simulation
Abstract
Abstract: To address the difficulty of autonomous decision-making for mobile picking robots operating in random, changeable, and complicated path environments in the field, an autonomous obstacle avoidance path planning method based on deep reinforcement learning is proposed. The state space and action space are defined, the reward function is designed with the artificial potential field method, and an obstacle penalty coefficient setting method based on collision cone collision avoidance detection is proposed to improve the robot's autonomous collision avoidance ability. A virtual simulation system is constructed, in which the mobile picking robot is trained with distributed proximal policy optimization (DPPO) and verified by experiments. Simulation results show that the system can quickly and stably control the virtual mobile picking robot to avoid obstacles autonomously and obtain a better operating path, providing theoretical and technical support for the autonomous navigation of picking robots.
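The abstract describes a reward shaped by an artificial potential field, with an obstacle penalty coefficient adjusted by a collision cone test. The Python sketch below is only a minimal illustration of that idea under assumptions: the gains K_ATT, K_REP, D0, the penalty scaling factor, and the function names are invented for illustration and are not the paper's exact formulation.

```python
import numpy as np

# Illustrative gains (assumed values, not from the paper): attractive gain,
# repulsive gain, and obstacle influence radius of the potential field.
K_ATT, K_REP, D0 = 1.0, 0.5, 2.0

def collision_cone_risk(rel_pos, rel_vel, obstacle_radius):
    """Collision-cone test: True if the robot's velocity relative to the
    obstacle points inside the cone subtended by the obstacle, i.e. the
    current motion leads to a predicted collision."""
    dist = np.linalg.norm(rel_pos)
    speed = np.linalg.norm(rel_vel)
    if dist <= obstacle_radius:          # already overlapping
        return True
    if speed < 1e-6:                     # (almost) no relative motion
        return False
    half_angle = np.arcsin(obstacle_radius / dist)          # cone half-angle
    cos_theta = np.dot(rel_vel, rel_pos) / (speed * dist)   # angle to line of sight
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return theta < half_angle

def potential_field_reward(robot_pos, robot_vel, goal_pos, obstacles):
    """Reward = negative artificial potential: attraction toward the goal plus
    a repulsive penalty per nearby obstacle, whose coefficient is enlarged
    when the collision-cone test flags a predicted collision."""
    reward = -K_ATT * np.linalg.norm(goal_pos - robot_pos)
    for obs_pos, obs_vel, obs_r in obstacles:
        d = np.linalg.norm(obs_pos - robot_pos)
        if d < D0:                       # obstacle inside the influence radius
            coeff = K_REP
            if collision_cone_risk(obs_pos - robot_pos, robot_vel - obs_vel, obs_r):
                coeff *= 5.0             # assumed scaling of the penalty coefficient
            reward -= coeff * (1.0 / max(d, 1e-3) - 1.0 / D0) ** 2
    return reward

# Example call with made-up 2D positions and velocities:
obstacles = [(np.array([1.5, 0.0]), np.zeros(2), 0.3)]
print(potential_field_reward(np.zeros(2), np.array([1.0, 0.0]),
                             np.array([5.0, 0.0]), obstacles))
```

In such a scheme the repulsive term only activates within the influence radius D0, and the collision-cone check lets the penalty grow only when the current relative motion would actually lead to a collision, rather than penalizing every nearby obstacle equally.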
Recommended Citation
Lin, Junqiang; Wang, Hongjun; Zou, Xiangjun; Zhang, Po; Li, Chengen; Zhou, Yipeng; and Yao, Shujie (2023) "Obstacle Avoidance Path Planning and Simulation of Mobile Picking Robot Based on DPPO," Journal of System Simulation: Vol. 35: Iss. 8, Article 6.
DOI: 10.16182/j.issn1004731x.joss.22-0487
Available at: https://dc-china-simulation.researchcommons.org/journal/vol35/iss8/6
First Page
1692
Last Page
1704
CLC
TP242;TP18
Included in
Artificial Intelligence and Robotics Commons, Computer Engineering Commons, Numerical Analysis and Scientific Computing Commons, Operations Research, Systems Engineering and Industrial Engineering Commons, Systems Science Commons