An Improved Human Pose Estimation Using Deep Neural Network for the Optimization of Human-Robot Interactions

Authors

Abstract

Research shows that mobile support robots are becoming increasingly valuable in a variety of situations, such as monitoring daily activities, providing medical services, and supporting elderly people. To interpret human behavior and intention, these robots largely depend on human activity recognition (HAR). However, prior awareness of human appearance (human recognition) and the ability to track humans for monitoring (human surveillance) are necessary for HAR to work with assistance robots. Moreover, multimodal human behavior recognition is constrained by costly hardware and rigorous setup requirements, making it difficult to balance inference accuracy against system cost. A key problem in human pose and behavior detection is therefore the ability to extract additional meaningful interpretations from readily available live video. In this paper, we employ human pose detection based on deep neural networks (DNNs) to address this problem and provide carefully designed evaluation measures to demonstrate the effectiveness of our approach. This article proposes a human intention detection system that anticipates human intentions in human- and robot-centered scenarios by incorporating visual information together with input features that include human positions, head orientations, and critical skeletal key points. Our goal is to support human-robot interactions by providing mobile robots with real-time human pose prediction through the recognition of 18 distinct key points in the body's structure. The proposed approach is implemented in Python, and simulation results verify its reliability and accuracy.
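As an illustrative sketch only (not the authors' implementation), the kind of 18-key-point body detection described above can be prototyped in Python with OpenCV's DNN module and an OpenPose-style COCO model; the model file names, input size, and confidence threshold below are assumptions.

import cv2

PROTO = "pose_deploy_linevec.prototxt"    # assumed OpenPose COCO network definition
WEIGHTS = "pose_iter_440000.caffemodel"   # assumed pretrained COCO weights
N_KEYPOINTS = 18                          # number of body key points used in the paper

net = cv2.dnn.readNetFromCaffe(PROTO, WEIGHTS)

def detect_keypoints(frame, conf_threshold=0.1):
    """Return up to 18 (x, y) key points for a single person; None where confidence is low."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368),
                                 (0, 0, 0), swapRB=False, crop=False)
    net.setInput(blob)
    out = net.forward()                   # heat maps; the first 18 channels are body-part maps
    points = []
    for i in range(N_KEYPOINTS):
        _, conf, _, peak = cv2.minMaxLoc(out[0, i, :, :])
        x = int(w * peak[0] / out.shape[3])
        y = int(h * peak[1] / out.shape[2])
        points.append((x, y) if conf > conf_threshold else None)
    return points

frame = cv2.imread("person.jpg")          # placeholder for a frame from a live video stream
if frame is not None:
    print(detect_keypoints(frame))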

Published

2025-10-13

Issue

Section

Biomedical Engineering