AUTHOR=Hu Lianhua, Ren Jiaqi TITLE=YOLO-LHD: an enhanced lightweight approach for helmet wearing detection in industrial environments JOURNAL=Frontiers in Built Environment VOLUME=9 YEAR=2023 URL=https://www.frontiersin.org/journals/built-environment/articles/10.3389/fbuil.2023.1288445 DOI=10.3389/fbuil.2023.1288445 ISSN=2297-3362 ABSTRACT=
Establishing a lightweight yet high-precision object detection algorithm is paramount for accurately assessing workers’ helmet-wearing status in intricate industrial settings. Helmet detection is inherently challenging due to factors such as the diminutive target size, intricate backgrounds, and the need to strike a balance between model compactness and detection accuracy. In this paper, we propose YOLO-LHD (You Only Look Once-Lightweight Helmet Detection), an efficient framework built upon the YOLOv8 object detection model. The proposed approach enhances the model’s ability to detect small targets in complex scenes by incorporating the Coordinate attention mechanism and the Focal loss function, and by introducing high-resolution features and a large-scale detection head. Additionally, we integrate the improved Ghostv2 module into the backbone feature extraction network to further improve the balance between model accuracy and size. We evaluated our method on the MHWD dataset established in this study and compared it with the YOLOv8n baseline model. The proposed YOLO-LHD model achieved a 66.1% reduction in model size while attaining the highest mAP50 of 94.3% with only 0.86M parameters. This demonstrates the effectiveness of the proposed approach in achieving lightweight deployment and high-precision helmet detection.
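For readers unfamiliar with the Focal loss cited in the abstract, the sketch below is a minimal, generic PyTorch-style formulation of binary focal loss (Lin et al., 2017), not the authors' implementation; the function name and the default values alpha=0.25 and gamma=2.0 are illustrative assumptions rather than settings taken from the paper.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Binary focal loss on raw logits: down-weights easy, well-classified
    # examples so training emphasizes hard cases such as small or
    # partially occluded helmet targets.
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)            # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets) # class-balancing weight
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

With gamma=0 and alpha=0.5 this reduces (up to a constant factor) to ordinary binary cross-entropy; increasing gamma shrinks the contribution of confidently classified examples.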