
YOLOv5-MHSA-DS: an efficient pig detection and counting method

Bibliographic Details
Published in: Systems science & control engineering, 2024-12, Vol. 12 (1)
Main Authors: Hao, Wangli; Zhang, Li; Xu, Shu-ai; Han, Meng; Li, Fuzhong; Yang, Hua
Format: Article
Language: English
Description: Accurate and efficient livestock detection and counting are crucial for agricultural intelligence. To address the obstacles posed by traditional manual methods and the limitations of current vision technology, we introduce YOLOv5-MHSA-DS, a novel model that integrates the YOLOv5 framework with Multi-Head Self-Attention and DySample modules. Multi-Head Self-Attention excels at capturing diverse features, enhancing pig detection and counting accuracy. DySample, in turn, dynamically adjusts its sampling strategy based on the input data, allowing the model to focus on the most critical parts of the image and thereby significantly improving pig detection and counting performance. To validate the generalization and robustness of the proposed model, we conducted ablation experiments. The results demonstrate that YOLOv5-MHSA-DS achieves an mAP of 93.8% and a counting accuracy of 95.0%, surpassing other models by margins of 12.2% and 19.0%, respectively.
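The Multi-Head Self-Attention component named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation (the paper embeds MHSA inside the YOLOv5 architecture); it is a plain NumPy illustration of the mechanism, with all weight matrices, dimensions, and the sequence length invented for the example. Each head attends over the same sequence in its own feature subspace, which is what lets the heads capture diverse features:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """Minimal multi-head self-attention over a sequence of feature vectors.

    x: (seq_len, d_model); each w_* matrix is (d_model, d_model).
    The feature dimension is split across heads, so each head can
    specialize on a different subspace of the input features.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    # Project, then split the feature dimension into heads: (heads, seq, d_head).
    q = (x @ w_q).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    k = (x @ w_k).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    v = (x @ w_v).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    # Scaled dot-product attention per head: (heads, seq, seq).
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)

    out = attn @ v                                    # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o                                  # mix heads back together

# Toy usage: 16 spatial positions with 32-dim features, 4 heads.
rng = np.random.default_rng(0)
d = 32
x = rng.standard_normal((16, d))
w_q, w_k, w_v, w_o = (rng.standard_normal((d, d)) * 0.1 for _ in range(4))
y = multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads=4)
print(y.shape)  # (16, 32)
```

The output has the same shape as the input, which is why such a block can be dropped into an existing backbone between convolutional stages.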
DOI: 10.1080/21642583.2024.2394428
ISSN: 2164-2583
Source: Taylor & Francis Open Access
Subjects: Ablation; Accuracy; Computer vision; Critical components; Deep learning; DySample; Efficiency; Hogs; Livestock; multi-head self-attention; Open access publishing; pig detection and counting; Systems science; YOLOv5-MHSA-DS