Journal of System Simulation

Abstract

Existing human action recognition methods focus mainly on translation information such as the coordinates and displacements of the skeleton structure, while paying less attention to the motion trend of the skeleton and to the rotation information that describes the motion direction of joints and bones. A spatio-temporal convolutional neural network method that incorporates rotation angle coding in spherical space is introduced. Scale-invariant angle information is obtained by mapping the human action into three-dimensional spherical space, and dynamic angular velocity information is extracted as the angle code to represent the rotation of joints and bones along the action trajectory. A spatio-temporal feature extraction and co-occurrence module (STCN) is constructed to better capture the spatio-temporal features of the data, and a suitable fusion strategy is used to fuse the translation and rotation features. The experimental results show that the rotation angle coding improves the accuracy of the motion representation and that the spatio-temporal feature extraction and co-occurrence module is effective.
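
As an intuition aid, the sketch below shows one way such a spherical angle code could be computed from 3D skeleton data. The choice of reference joint, the azimuth/elevation convention, and the use of a first-order temporal difference as the angular velocity are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def spherical_angle_code(joints):
    """
    Illustrative sketch of a rotation angle code in spherical space.

    joints : ndarray of shape (T, J, 3)
        3D joint coordinates over T frames for J joints.

    Returns an array of shape (T-1, J, 2) holding frame-to-frame changes
    of azimuth and elevation, i.e. an angular-velocity-like code.
    Assumed formulation for illustration only.
    """
    # Center each frame on a reference joint (assumed: joint 0, e.g. the hip).
    rel = joints - joints[:, :1, :]

    x, y, z = rel[..., 0], rel[..., 1], rel[..., 2]
    r = np.linalg.norm(rel, axis=-1) + 1e-8            # radius is dropped -> scale invariance
    azimuth = np.arctan2(y, x)                         # angle within the x-y plane
    elevation = np.arcsin(np.clip(z / r, -1.0, 1.0))   # angle above the x-y plane

    angles = np.stack([azimuth, elevation], axis=-1)   # (T, J, 2)

    # Dynamic angular velocity: first-order temporal difference of the angles,
    # wrapped to (-pi, pi] so crossing the +/- pi boundary does not cause a jump.
    dtheta = np.diff(angles, axis=0)
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi
    return dtheta
```

In this sketch the radial distance is discarded after normalization, which is what makes the resulting angle code invariant to the scale of the skeleton; the wrapped temporal difference then serves as the dynamic angular velocity term.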

First Page

1433

Last Page

1441

CLC

TP391.9

Recommended Citation

Su Benyue, Zhu Bangguo, Guo Mengjuan, et al. Fusing Rotation Angle Coding in Spherical Space for Human Action Recognition[J]. Journal of System Simulation, 2024, 36(6): 1433-1441.

DOI

10.16182/j.issn1004731x.joss.23-1263
