Human Action Recognition Based on Multiple Feature Fusion

R.J. Ma, H.S. Zhang

Department of Electronics and Information, Northwestern Polytechnical University, 127 West Youyi Road, Xi'an, Shaanxi, China

15 March 2017
15 April 2017
31 March 2017



Human action recognition generally uses geometric or statistical characteristics as training data. The geometric characteristics of an image can be described by a pulse coupled neural network (PCNN), whose output can be displayed as a waveform, and the empirical mode decomposition (EMD) algorithm can then extract features from such waveforms. We therefore propose a motion feature description algorithm that combines PCNN and EMD. Experimental results show that PCNN-EMD-based feature recognition achieves high accuracy, and that fusing features with kernel principal component analysis (KPCA) yields an even higher recognition rate.
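The paper's own fusion pipeline is not reproduced in this excerpt; as an illustration of the KPCA step alone, the sketch below is a minimal NumPy implementation of kernel PCA with an RBF kernel. The function name, the `gamma` value, and the input feature matrix are all hypothetical, not taken from the paper.

```python
import numpy as np

def rbf_kernel_pca(X, gamma=1.0, n_components=2):
    """Project samples onto the top kernel principal components.

    X: (n_samples, n_features) matrix of fused feature vectors.
    gamma: RBF kernel width parameter (illustrative default).
    Returns an (n_samples, n_components) array of projections.
    """
    # Pairwise squared Euclidean distances, then the RBF kernel matrix.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    K = np.exp(-gamma * d2)

    # Center the kernel matrix in the implicit feature space.
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition of the symmetric centered kernel;
    # np.linalg.eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx]
    lambdas = np.maximum(eigvals[idx], 0.0)

    # Projected coordinates are the eigenvectors scaled by sqrt(eigenvalue).
    return alphas * np.sqrt(lambdas)
```

In a fusion setting, the rows of `X` would be the concatenated PCNN and EMD feature vectors of each action sample, and the projected coordinates would feed the classifier in place of the raw concatenation.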


human action recognition, EMD, Gabor, PCNN, KPCA

1. Introduction
2. Image Preprocessing
3. Action Image Feature Extraction by Texture
4. Statistical Features
5. Experiment Analysis
6. Conclusion
