Journal of System Simulation

Abstract

To address the problem that current pedestrian detectors struggle to extract complete features in heavily occluded environments and consequently suffer from low detection accuracy, a novel adaptive multi-scale feature pyramid network is proposed. A multi-scale feature enhancement module (MFEM) is developed, which captures the visible regions of pedestrians at different scales through a multi-branch network with different receptive fields. An adaptive fusion module (AFM) is proposed, which calculates the importance of individual pixels by optimizing the mean and variance at the spatial and feature levels, thereby enhancing the texture and semantic features of pedestrians and fusing features of different scales more efficiently. These two modules can be combined into a complete feature pyramid network for downstream tasks. A non-maximum suppression algorithm named Soft-SNMS (soft-set non-maximum suppression) is also proposed; when scoring the candidate boxes in a proposal, it retains high-quality candidate boxes through different decay functions, removes useless candidate boxes, and improves the efficiency of model training. The proposed method is evaluated on the CrowdHuman and WiderPerson datasets, where it improves the AP metric by 4.04% and 1.51%, respectively, over the original method. The results indicate that the method can effectively improve the detection accuracy of pedestrians in occluded environments.
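The abstract only outlines Soft-SNMS at a high level; the paper's exact decay functions are not reproduced here. The sketch below illustrates the general soft-suppression idea the abstract alludes to (decaying the scores of overlapping candidate boxes instead of discarding them outright), assuming a linear decay and hypothetical helper names `soft_nms_linear` and `pairwise_iou`.

```python
import numpy as np

def pairwise_iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms_linear(boxes, scores, iou_thresh=0.5, score_thresh=0.001):
    """Soft suppression sketch: high-overlap boxes have their scores decayed
    rather than being removed, and boxes whose score falls below a threshold
    are dropped from further consideration."""
    boxes = np.asarray(boxes, dtype=np.float64)
    scores = np.asarray(scores, dtype=np.float64).copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        # Select the remaining box with the highest (possibly decayed) score.
        top = idxs[np.argmax(scores[idxs])]
        keep.append(top)
        rest = idxs[idxs != top]
        if len(rest) == 0:
            break
        ious = pairwise_iou(boxes[top], boxes[rest])
        # Linear decay applied only above the IoU threshold.
        decay = np.where(ious > iou_thresh, 1.0 - ious, 1.0)
        scores[rest] *= decay
        # Remove candidates whose decayed score is now negligible.
        idxs = rest[scores[rest] > score_thresh]
    return keep
```

This retains boxes that overlap a confident detection but still carry evidence of a distinct (e.g. occluded) pedestrian, which is the motivation the abstract gives for replacing hard non-maximum suppression.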

First Page

1222

Last Page

1233

CLC

TP391

Recommended Citation

Zhou Huaping, Wu Tao, Sun Kelei. Adaptive Multi-scale Feature Pyramid Network for Occlusion Pedestrian Detection[J]. Journal of System Simulation, 2025, 37(5): 1222-1233.

Corresponding Author

Wu Tao

DOI

10.16182/j.issn1004731x.joss.24-0018
