Journal of System Simulation
Abstract
Scene-matching visual navigation conventionally requires hardware to measure the camera's distance and attitude. A position and attitude estimation method based on combinations of feature points in a calibration area is proposed. The method selects the best set of Scale-Invariant Feature Transform (SIFT) feature matching points between the real-time image and the calibration area of the reference image, computes the local ground coordinates of the matched points by linear interpolation inside a triangle, and then obtains the camera position and attitude of the real-time image by space resection. Because the method is implemented entirely in software, it avoids the drawbacks of hardware measurement of the camera's distance and attitude and extends the applicability of scene-matching visual navigation. Experimental results show that the position and attitude computed for the real-time image are close to the ground truth.
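The triangle-interior linear interpolation step mentioned in the abstract can be sketched with barycentric coordinates: given three matched control points whose image and ground coordinates are both known, the ground coordinates of another matched point inside the triangle follow from its barycentric weights. This is a minimal illustrative sketch; the function names and the sample coordinates below are assumptions, not taken from the paper.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p w.r.t. triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w1, w2, 1.0 - w1 - w2

def interpolate_ground(p_img, tri_img, tri_ground):
    """Linearly interpolate ground (x, y) for an image point inside a triangle
    whose three vertices have known image and ground coordinates."""
    w1, w2, w3 = barycentric(p_img, *tri_img)
    gx = w1 * tri_ground[0][0] + w2 * tri_ground[1][0] + w3 * tri_ground[2][0]
    gy = w1 * tri_ground[0][1] + w2 * tri_ground[1][1] + w3 * tri_ground[2][1]
    return gx, gy

# Illustrative (assumed) data: an image-plane triangle and the ground
# coordinates of its three vertices.
tri_img = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
tri_ground = [(500.0, 500.0), (600.0, 500.0), (500.0, 600.0)]
print(interpolate_ground((50.0, 50.0), tri_img, tri_ground))  # → (550.0, 550.0)
```

The interpolated ground points, together with their image coordinates, would then serve as control points for space resection (camera pose from 2D-3D correspondences), which is beyond this sketch.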
Recommended Citation
Peng, Cai; Shen, Chaoping; and Li, Hongyan (2021) "Position and Attitude Estimation Based on Combination Matching in Calibration Area," Journal of System Simulation: Vol. 33: Iss. 7, Article 15.
DOI: 10.16182/j.issn1004731x.joss.20-0218
Available at:
https://dc-china-simulation.researchcommons.org/journal/vol33/iss7/15
First Page
1638
Last Page
1646
Revised Date
2020-06-03
DOI Link
https://doi.org/10.16182/j.issn1004731x.joss.20-0218
CLC
TP391
Included in
Artificial Intelligence and Robotics Commons, Computer Engineering Commons, Numerical Analysis and Scientific Computing Commons, Operations Research, Systems Engineering and Industrial Engineering Commons, Systems Science Commons