The Human Motion Behavior Recognition by Deep Learning Approach and the Internet of Things.
DOI: https://doi.org/10.9781/ijimai.2024.07.004

Keywords: Behavior Recognition, Convolutional Neural Network, Human Body Movement, Internet of Things

Abstract
This paper explores the practical implementation of deep learning and Internet of Things (IoT) technology in systems for recognizing human motion behavior, with particular emphasis on performance in complex environments. Convolutional Neural Networks (CNNs) are employed to mitigate the poor robustness and high computational workload of conventional recognition approaches. The primary focus is on improving real-time accuracy and making recognition systems suitable for practical, real-world applications. Specifically, the parameters of the CNN model are fine-tuned to improve recognition performance. The paper first outlines the process and methodology of human motion recognition, then examines the application of the CNN model to recognizing human motion behavior in depth. To acquire motion behavior data in authentic settings, the IoT is used to extract relevant information from the living environment. The Royal Institute of Technology (KTH) dataset is chosen for the recognition experiments. The analysis demonstrates that the network training loss reaches a minimum of 0.0001, and the trained CNN model achieves a peak average recognition accuracy of 94.41%. Notably, across different models, recognition accuracy for static motion behavior generally exceeds that for dynamic motion behavior. The CNN-based human motion behavior recognition method exhibits promising results in both static and dynamic recognition scenarios.
Furthermore, the paper advocates using the IoT to collect human motion behavior data in real-world living environments. The research findings carry significant implications for advancing human motion behavior recognition technology and enhancing its applications in areas such as intelligent surveillance and health management.
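To make the recognition pipeline described above concrete, the following is a minimal illustrative sketch, not the paper's actual architecture: a single convolution, ReLU, max-pooling, and softmax pass over one grayscale frame, scoring the six action classes of the KTH dataset. The layer sizes, filter counts, and random weights are all assumptions for demonstration; a trained model would learn these parameters.

```python
import numpy as np

# The six action classes of the KTH dataset.
KTH_CLASSES = ["walking", "jogging", "running",
               "boxing", "hand waving", "hand clapping"]

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, truncating edges that don't fit."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_frame(frame, kernels, fc_weights):
    """Forward pass: conv -> ReLU -> pool per filter, then a dense softmax layer."""
    feats = []
    for k in kernels:
        a = np.maximum(conv2d(frame, k), 0.0)  # ReLU activation
        feats.append(max_pool(a).ravel())
    feats = np.concatenate(feats)
    return softmax(fc_weights @ feats)

rng = np.random.default_rng(0)
frame = rng.random((16, 16))                    # toy 16x16 grayscale frame
kernels = rng.standard_normal((4, 3, 3)) * 0.1  # 4 random 3x3 filters (untrained)
feat_dim = 4 * 7 * 7                            # 4 maps of 7x7 after pooling
fc = rng.standard_normal((len(KTH_CLASSES), feat_dim)) * 0.01
probs = classify_frame(frame, kernels, fc)      # one probability per class
```

With random weights the class probabilities are meaningless; the sketch only shows the shape of the computation. In practice the paper's fine-tuning step would adjust `kernels` and `fc` by backpropagation on labeled KTH video frames, and a real model would stack several such layers.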