Journal of System Simulation

Abstract

In mobile robot visual SLAM under large-angle view motion, scenes with drastic illumination changes or sparse texture make feature point extraction difficult, and features are hard to match at extreme viewing angles, which leads to large errors in epipolar geometry computation. To address this, a visual SLAM algorithm fusing an improved graph neural network (GNN-SLAM) is proposed. A feature extraction network with a priori location estimation is proposed to achieve fast and uniform detection and description of image feature points and to construct reliable and accurate feature point information. A graph attention feature matching network is proposed, which aggregates feature point information through a message-passing graph neural network and then applies self- and cross-attention to perform weighted feature matching between consecutive image frames. The feature extraction and matching graph neural networks are fused with the back-end nonlinear optimization, loop closure correction, and local mapping modules of the ORB-SLAM2 system to form a complete monocular GNN-SLAM system. Experimental results show that, compared with the ORB-SLAM2 algorithm on the KITTI dataset, the absolute trajectory error of the proposed algorithm is reduced by 37.59% and the relative pose error by 19.67%.
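The matching stage described in the abstract (message passing over feature-point graphs, then self- and cross-attention weighted matching) can be sketched roughly as follows. This is a minimal NumPy illustration in the style of such matchers, not the paper's implementation; all function names are hypothetical, and real systems learn the attention projections rather than using raw descriptors.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention over feature-point descriptors."""
    scores = q @ k.T / np.sqrt(q.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v

def match_features(desc_a, desc_b, n_rounds=3):
    """Toy GNN-style matcher: alternate self-attention (within each
    frame) and cross-attention (between frames) as message-passing
    rounds, then match by mutual nearest neighbour on the refined
    descriptors. Hypothetical sketch, not the paper's network."""
    a, b = desc_a.astype(float).copy(), desc_b.astype(float).copy()
    for _ in range(n_rounds):
        a = a + attention(a, a, a)  # self-attention, frame A
        b = b + attention(b, b, b)  # self-attention, frame B
        a = a + attention(a, b, b)  # cross-attention, A attends to B
        b = b + attention(b, a, a)  # cross-attention, B attends to A
    sim = a @ b.T
    # Keep only mutual nearest-neighbour pairs (a simple stand-in for
    # the learned assignment layer used by real matchers).
    matches = []
    for i in range(sim.shape[0]):
        j = int(sim[i].argmax())
        if int(sim[:, j].argmax()) == i:
            matches.append((i, j))
    return matches
```

Because attention is permutation-equivariant, the same set of points seen in a different order in the second frame is matched back consistently, which is the property a matcher needs under large viewpoint changes.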

First Page

1043

Last Page

1060

CLC

TP242

Recommended Citation

Liu Jinhui, Chen Mengyuan, Han Pengpeng, et al. A Graph Neural Network Visual SLAM Algorithm for Large-angle View Motion[J]. Journal of System Simulation, 2024, 36(5): 1043-1060.

Corresponding Author

Chen Mengyuan

DOI

10.16182/j.issn1004731x.joss.23-0053
