Falls are a major cause of serious injury, especially among geriatric populations worldwide. Ensuring constant supervision in hospitals or smart environments while maintaining comfort and privacy is practically impossible. Fall detection has therefore become a significant area of research, particularly with the use of multimodal sensors. The lack of efficient techniques for automatic fall detection hampers the creation of effective preventive tools capable of identifying falls during physical activity in long-term care environments. The primary goal of this article is to examine the benefits of using multimodal sensors to improve the accuracy of fall detection systems.
The proposed method combines time–frequency features extracted from inertial sensors with skeleton-based features derived from depth sensors. These multimodal features are then integrated using a fusion technique, after which feature optimization and a modified K-Ary classifier are applied to the fused data.
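The sketch below illustrates the general shape of such a feature-level fusion pipeline. It is not the paper's implementation: the window length, sampling rate, specific time–frequency and skeleton features, and the use of scikit-learn's RandomForestClassifier as a stand-in for the modified K-Ary classifier (whose details are not reproduced here) are all assumptions made for illustration.

```python
# Minimal sketch of a multimodal feature-fusion pipeline for fall detection.
# All parameter choices are illustrative assumptions, not the paper's values.
import numpy as np
from scipy.signal import stft
from sklearn.ensemble import RandomForestClassifier

def inertial_features(accel_window, fs=50):
    """Time-frequency features from one tri-axial accelerometer window.

    accel_window: array of shape (n_samples, 3).
    Returns per-axis total spectral energy and dominant frequency via STFT.
    """
    feats = []
    for axis in range(accel_window.shape[1]):
        f, _, Z = stft(accel_window[:, axis], fs=fs, nperseg=32)
        power = np.abs(Z) ** 2
        feats.append(power.sum())                     # total spectral energy
        feats.append(f[power.mean(axis=1).argmax()])  # dominant frequency
    return np.asarray(feats)

def skeleton_features(joints):
    """Skeleton-based features from one depth-sensor frame.

    joints: array of shape (n_joints, 3) of 3-D joint positions.
    Uses pairwise joint distances as a simple pose descriptor.
    """
    diffs = joints[:, None, :] - joints[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(joints), k=1)
    return dists[iu]

def fuse(accel_window, joints, fs=50):
    """Feature-level fusion: concatenate both modalities' feature vectors."""
    return np.concatenate([inertial_features(accel_window, fs),
                           skeleton_features(joints)])

# Demo with synthetic data: 100 samples, 2-s accel windows @ 50 Hz, 20 joints.
rng = np.random.default_rng(0)
X = np.stack([fuse(rng.normal(size=(100, 3)), rng.normal(size=(20, 3)))
              for _ in range(100)])
y = rng.integers(0, 2, size=100)  # synthetic labels: 1 = fall, 0 = no fall
clf = RandomForestClassifier(n_estimators=50).fit(X, y)  # classifier stand-in
print(clf.predict(X[:5]))
```

In a feature-level scheme like this, fusion happens before classification by concatenating the per-modality feature vectors, so any subsequent optimization and classification stage operates on a single joint representation.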
The proposed model achieved an accuracy of 97.97% on the UP-Fall Detection dataset and 97.89% on the UR-Fall Detection dataset.
These results indicate that the proposed model outperforms state-of-the-art classification methods. Additionally, the model can be deployed as an IoT-based solution, supporting the development of tools that help prevent fall-related injuries.