Journal of System Simulation

Abstract

In autonomous driving simulation and industrial virtual reality simulation, there is a high demand for accuracy and robustness in 3D human body modeling. However, current joint-based human modeling approaches suffer from issues such as jitter in continuous modeling, local distortion, and poor adaptability to occlusion, which degrade model quality and limit the development of practical applications such as intelligent driving and digital factories. To address these challenges, this paper proposes a multi-view vision-based inverse kinematics 3D human modeling method using a vector quantized variational autoencoder (IK-VQ-VAE). By integrating joint training with an automatic variational gradient descent approach, the proposed method achieves multi-view temporal fusion and enhanced occlusion adaptability, resulting in more robust and realistic human pose reconstruction. Experiments conducted on the public Shelf dataset demonstrate that the proposed method achieves a maximum improvement of 23.7% and an average improvement of 8.7% in the percentage of correct parts (PCP) compared with recent optimized approaches. Qualitative results further confirm that the proposed method produces superior 3D human modeling performance compared to existing methods.

First Page

99

Last Page

111

CLC

TP391.9

Recommended Citation

Fang Guoyu, Li Yanze, Chen Kai, et al. Inverse Kinematics 3D Human Modeling Simulation based on Multi-view Vision[J]. Journal of System Simulation, 2026, 38(1): 99-111.

Corresponding Author

Chen Kai

DOI

10.16182/j.issn1004731x.joss.25-0742
