Visionary vigilance: Optimized YOLOV8 for fallen person detection with large-scale benchmark dataset
Published in: Image and Vision Computing, 2024-09, Vol. 149, Article 105195
Format: Article
Language: English
Summary: Falls pose a significant risk to elderly people, patients with conditions such as neurological disorders and cardiovascular diseases, and disabled children. This highlights the need for real-time intelligent fall detection (FD) systems that provide quick relief and support assisted living. Existing attempts are often based on multimodal approaches, which are computationally expensive due to multi-sensor integration. Computer vision (CV) based FD requires the deployment of state-of-the-art (SOTA) networks with progressive enhancements to capture falls effectively. However, CV-based systems often cannot operate efficiently in real time, and attempts at visual intelligence are usually not integrated at feasible stages of the networks. More importantly, the lack of large-scale, well-annotated benchmark datasets limits FD in challenging and complex environments. To bridge these research gaps, we propose an enhanced version of YOLOv8 for FD, with three key contributions. First, a comprehensive large-scale dataset is introduced, comprising approximately 10,500 image samples with corresponding annotations; it covers diverse environmental conditions and scenarios, improving the generalization ability of trained models. Second, progressive enhancements to the YOLOv8s model are proposed: a focus module is integrated into the backbone to optimize feature extraction, and convolutional block attention modules (CBAMs) are added at feasible stages of the network to improve spatial and channel context for more accurate detection, especially in complex scenes. Finally, an extensive empirical evaluation demonstrates the superiority of the proposed network over 13 SOTA techniques, substantiated by meticulous benchmarking and qualitative validation across varied environments. The empirical findings and the analysis of factors such as model performance, size, and processing time show that the proposed network delivers impressive results. The annotated dataset, results, and the progressive code modifications will be made available to the research community at https://github.com/habib1402/Fall-Detection-DiverseFall10500
Highlights:
• Developed DiverseFALL10500: a comprehensive dataset with 10,500 annotated images.
• Optimized YOLOv8s with focus module and CBAM integration for improved performance.
• Demonstrated superiority over 13 state-of-the-art techniques.
ISSN: 0262-8856
DOI: 10.1016/j.imavis.2024.105195
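
The summary above names two architectural additions to the YOLOv8s detector: a focus module in the backbone and CBAM attention blocks at selected stages. The authors' released code at the GitHub link is the authoritative reference; what follows is only a minimal PyTorch sketch of what a Focus slicing layer and a CBAM block of that kind typically look like. The class names, reduction ratio (16), kernel sizes, and placement are illustrative assumptions, not the paper's actual settings.

# Hypothetical sketch (not the authors' released code): a Focus slicing module and a
# CBAM (channel + spatial attention) block of the sort the abstract describes inserting
# into a YOLOv8s-style backbone. All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class Focus(nn.Module):
    """Slice each 2x2 pixel neighbourhood into channels, then fuse with a convolution.
    Turns (B, C, H, W) into (B, 4C, H/2, W/2) before the convolution."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4 * in_ch, out_ch, k, stride=1, padding=k // 2, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gather the four spatial sub-grids and stack them along the channel axis.
        patches = torch.cat(
            [x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]],
            dim=1,
        )
        return self.conv(patches)


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial_conv = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: shared MLP over average- and max-pooled channel descriptors.
        avg = self.channel_mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.channel_mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: convolution over channel-wise average and max maps.
        spatial = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(spatial))


if __name__ == "__main__":
    # Toy check: push a frame-sized tensor through Focus and then CBAM.
    feats = CBAM(64)(Focus(3, 64)(torch.randn(1, 3, 640, 640)))
    print(feats.shape)  # torch.Size([1, 64, 320, 320])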