Optimizing Road Safety: Advancements in Lightweight YOLOv8 Models and GhostC2f Design for Real-Time Distracted Driving Detection

The rapid detection of distracted driving behaviors is crucial for enhancing road safety and preventing traffic accidents. Compared with traditional distracted-driving-detection methods, the YOLOv8 model has been shown to possess powerful capabilities, enabling it to perceive global information more swiftly. The successful application of GhostConv in edge computing and embedded systems further validates the advantages of lightweight design for real-time detection with large models. Effectively integrating lightweight strategies into YOLOv8 while limiting their impact on model performance has therefore become a focal point in deep-learning-based real-time distracted driving detection. Inspired by GhostConv, this paper presents an innovative GhostC2f design that integrates the idea of linear transformations, which generate additional feature maps without extra computation, into YOLOv8 for real-time distracted-driving-detection tasks, with the goal of reducing model parameters and computational load. Additionally, the path aggregation network (PAN) is enhanced to amplify multi-level feature fusion and contextual information propagation. Furthermore, simple attention modules (SimAMs) are introduced to perform self-normalization on each feature map, emphasizing feature maps that carry valuable information and suppressing redundant interference from complex backgrounds. Lastly, the nine distracted driving types in the publicly available SFDDD dataset were expanded to 14 categories, and nighttime scenarios were added. The results show a 5.1% improvement in model accuracy, with model weight size and computational load reduced by 36.7% and 34.6%, respectively. In 30 real-vehicle tests, distracted-driving-detection accuracy reached 91.9% in daylight and 90.3% at night, confirming the strong performance of the proposed model in assisting distracted-driving detection and contributing to accident-risk reduction.
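The abstract builds on two generic building blocks that can be illustrated briefly. The first is the GhostConv idea that GhostC2f extends: a small primary convolution produces a subset of the output feature maps, and a cheap linear transformation (here a depthwise convolution) synthesizes the remaining "ghost" maps, cutting parameters and compute roughly in half. The sketch below is a minimal PyTorch rendering of that idea under assumed hyperparameters (ratio, kernel sizes, SiLU activation); it is not the authors' GhostC2f module.

```python
import torch
import torch.nn as nn


class GhostConvSketch(nn.Module):
    """Minimal GhostConv-style block: primary conv + cheap depthwise 'ghost' maps."""

    def __init__(self, c_in, c_out, kernel_size=1, dw_size=5, ratio=2):
        super().__init__()
        c_primary = c_out // ratio        # intrinsic feature maps from the real conv
        c_ghost = c_out - c_primary       # extra maps produced by the cheap linear op
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_primary, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(c_primary),
            nn.SiLU(),
        )
        # Depthwise convolution acts as the cheap linear transformation that
        # derives ghost feature maps from the primary ones.
        self.cheap = nn.Sequential(
            nn.Conv2d(c_primary, c_ghost, dw_size, padding=dw_size // 2,
                      groups=c_primary, bias=False),
            nn.BatchNorm2d(c_ghost),
            nn.SiLU(),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```

The second is the SimAM-style parameter-free attention applied to each feature map: an energy term computed from each channel's own mean and variance rescales activations through a sigmoid, emphasizing informative neurons and damping redundant ones. Again, this is a generic sketch of the published SimAM formulation rather than the exact module used in the paper; `lambda_` is an assumed regularization constant.

```python
import torch


def simam(x: torch.Tensor, lambda_: float = 1e-4) -> torch.Tensor:
    """Parameter-free SimAM-style rescaling of a (batch, channels, H, W) tensor."""
    n = x.shape[2] * x.shape[3] - 1
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation per pixel
    v = d.sum(dim=(2, 3), keepdim=True) / n             # per-channel variance estimate
    e_inv = d / (4 * (v + lambda_)) + 0.5               # inverse energy per neuron
    return x * torch.sigmoid(e_inv)                     # emphasize informative activations
```

In a YOLOv8-style network, blocks of this kind would typically stand in for standard convolutions inside the C2f bottlenecks, with the SimAM rescaling applied to fused feature maps; the exact placement in the authors' model is described in the paper itself.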

Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2023-10, Vol. 23 (21), p. 8844
Main Authors: Du, Yingjie, Liu, Xiaofeng, Yi, Yuwei, Wei, Kun
Format: Article
Language: English
Publisher: MDPI AG, Basel
ISSN: 1424-8220
DOI: 10.3390/s23218844
Subjects: Accuracy; Algorithms; attention mechanism; Behavior; Comparative analysis; Datasets; Deep learning; Distracted driving; Embedded systems; Fatalities; feature fusion; Fourier transforms; GhostConv; Neural networks; Physiology; Processing speed; Traffic accidents & safety; YOLOv8n