Journal of System Simulation

Abstract

In recent years, human action recognition based on skeleton data has received considerable attention in computer vision and human-computer interaction. Most existing methods model skeleton points in the original 3D coordinate space. However, point-based representations ignore the physical chain structure of the human body, making it difficult to capture the local correlations of human motion. In addition, because of the diversity of camera views, it is difficult to obtain a comprehensive representation of actions across different views in the original point-based 3D space. In view of this, this paper proposed an action recognition method based on skeleton edge information in projection subspaces. The method defined skeleton edge information according to the body's own joint connections to capture the spatial characteristics of an action, and introduced the direction and magnitude of skeleton edge motion on top of the edge information to capture its temporal characteristics. 2D projection subspaces were used to characterize actions from different subspace perspectives. A suitable feature fusion strategy was explored, and the above features were extracted comprehensively through an improved CNN framework. Experimental results on two challenging large-scale datasets, NTU-RGB+D 60 (cross-subject and cross-view evaluation metrics) and NTU-RGB+D 120 (cross-subject and cross-set evaluation metrics), show that compared with the benchmark method, the proposed method improves the accuracy under the four metrics by 3.2%, 2.4%, 3.1%, and 5.8%, respectively.
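The core features described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the bone list, array shapes, and function names are assumptions chosen for the example; skeleton edges are taken as child-minus-parent joint differences, edge motion as their frame-to-frame difference, and the 2D subspaces as projections onto the three coordinate planes.

```python
import numpy as np

# Hypothetical bone list: (child, parent) joint-index pairs along the body's
# kinematic chain. A real skeleton (e.g. 25 NTU joints) has many more bones.
BONES = [(1, 0), (2, 1), (3, 2)]

def skeleton_edges(joints):
    """joints: (T, J, 3) array of 3D joint coordinates over T frames.
    Returns (T, E, 3) edge vectors (child joint minus parent joint),
    encoding the body's physical chain structure."""
    return np.stack([joints[:, c] - joints[:, p] for c, p in BONES], axis=1)

def edge_motion(edges):
    """Temporal difference of edge vectors: the direction of motion
    and its magnitude per edge, capturing temporal characteristics."""
    motion = np.diff(edges, axis=0)                          # (T-1, E, 3)
    magnitude = np.linalg.norm(motion, axis=-1, keepdims=True)  # (T-1, E, 1)
    return motion, magnitude

def project_subspaces(edges):
    """Project 3D edge vectors onto the three 2D coordinate planes,
    giving action representations under different subspace views."""
    xy = edges[..., [0, 1]]
    yz = edges[..., [1, 2]]
    xz = edges[..., [0, 2]]
    return xy, yz, xz
```

The spatial (edge), temporal (edge motion), and view (2D projection) features produced here would then be fused and fed to a CNN, per the fusion strategy the abstract mentions.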

First Page

555

Last Page

563

CLC

TP391.4

Recommended Citation

Su Benyue, Zhang Peng, Zhu Bangguo, et al. Human Action Recognition Based on Skeleton Edge Information Under Projection Subspace[J]. Journal of System Simulation, 2024, 36(3): 555-563.

DOI

10.16182/j.issn1004731x.joss.22-1234
