Journal of System Simulation
Abstract
Video action recognition is an important part of intelligent video analysis. In recent years, deep learning methods, especially the two-stream convolutional neural network, have achieved state-of-the-art performance. However, most methods simply use uniform sampling to select frames, which may lose information within each sampling interval. We propose a segmentation method and a key-frame extraction method for video action recognition, and combine them with a multi-temporal-scale two-stream network. Our framework achieves 94.2% accuracy on UCF101 split 1, matching the performance of state-of-the-art methods.
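The abstract does not spell out the key-frame criterion, so the sketch below is only a rough illustration of the idea it describes: contrasting uniform sampling with a key-frame selection that picks, within each segment, the frame with the largest inter-frame difference. The segment count, the difference-based score, and all function names are assumptions for illustration, not the authors' method.

```python
import numpy as np

def uniform_sample(num_frames, num_segments):
    """Baseline: take the middle frame of each equal-length segment."""
    bounds = np.linspace(0, num_frames, num_segments + 1, dtype=int)
    return [(lo + hi) // 2 for lo, hi in zip(bounds[:-1], bounds[1:])]

def keyframe_sample(frames, num_segments):
    """Illustrative key-frame selection (assumed heuristic): within each
    segment, keep the frame with the largest mean absolute difference
    from its predecessor."""
    num_frames = len(frames)
    bounds = np.linspace(0, num_frames, num_segments + 1, dtype=int)
    # Per-frame motion score: mean absolute difference to the previous frame.
    diffs = np.zeros(num_frames)
    for i in range(1, num_frames):
        diffs[i] = np.mean(np.abs(frames[i].astype(np.float32) -
                                  frames[i - 1].astype(np.float32)))
    indices = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        indices.append(lo + int(np.argmax(diffs[lo:hi])))
    return indices

if __name__ == "__main__":
    # Toy "video": 120 random 8x8 grayscale frames.
    rng = np.random.default_rng(0)
    video = rng.integers(0, 256, size=(120, 8, 8), dtype=np.uint8)
    print("uniform  :", uniform_sample(len(video), num_segments=3))
    print("key-frame:", keyframe_sample(video, num_segments=3))
```

The selected indices would then feed the RGB and optical-flow streams of a two-stream network in place of uniformly spaced frames.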
Recommended Citation
Li, Mingxiao; Geng, Qichuan; Mo, Hong; Wu, Wei; and Zhou, Zhong (2019) "Video Action Recognition Based on Key-frame," Journal of System Simulation: Vol. 30: Iss. 7, Article 44.
DOI: 10.16182/j.issn1004731x.joss.201807044
Available at: https://dc-china-simulation.researchcommons.org/journal/vol30/iss7/44
First Page
2787
DOI Link
https://doi.org/10.16182/j.issn1004731x.joss.201807044
Last Page
2793
CLC
TP391.4
Recommended Citation
Li Mingxiao, Geng Qichuan, Mo Hong, Wu Wei, Zhou Zhong. Video Action Recognition Based on Key-frame[J]. Journal of System Simulation, 2018, 30(7): 2787-2793.