YOLOv9 for fracture detection in pediatric wrist trauma X‐ray images
The introduction of YOLOv9, the latest version of the you only look once (YOLO) series, has led to its widespread adoption across various scenarios. This paper is the first to apply the YOLOv9 model to the fracture detection task as computer‐assisted diagnosis, helping radiologists and surgeons interpret X‐ray images. Specifically, the model was trained on the GRAZPEDWRI‐DX dataset, and the training set was extended using data augmentation techniques to improve model performance. Experimental results demonstrate that, compared to the current state‐of‐the‐art model, the YOLOv9 model increased the mAP 50–95 from 42.16% to 43.73%, a relative improvement of 3.7%. The implementation code is publicly available at https://github.com/RuiyangJu/YOLOv9‐Fracture‐Detection.

YOLOv9, released in February 2024, is the latest version of the you only look once (YOLO) series of object detection algorithms. This paper presents a framework for pediatric wrist fracture detection based on YOLOv9 that reaches state‐of‐the‐art performance through training on the GRAZPEDWRI‐DX dataset. This is significant because misinterpretation of fracture X‐ray images may lead to failed surgery and further harm to patients; the hope is that artificial intelligence technology can prevent such incidents.
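The abstract notes that the training set was extended with data augmentation; the exact pipeline is described in the paper and the linked repository. As a hedged illustration only, the sketch below generates brightness/contrast‐adjusted copies of YOLO‐format training images and copies the label files unchanged, since photometric changes do not move bounding boxes. The directory layout, file extension, and (alpha, beta) values are assumptions made for this example, not taken from the paper.

```python
# Hypothetical augmentation sketch (not the authors' released pipeline):
# extend a YOLO-format training set with brightness/contrast-adjusted copies.
import shutil
from pathlib import Path

import cv2  # OpenCV

SRC_IMAGES = Path("data/train/images")    # assumed dataset layout
SRC_LABELS = Path("data/train/labels")
DST_IMAGES = Path("data/train_aug/images")
DST_LABELS = Path("data/train_aug/labels")
DST_IMAGES.mkdir(parents=True, exist_ok=True)
DST_LABELS.mkdir(parents=True, exist_ok=True)

# (alpha, beta): alpha scales contrast, beta shifts brightness (example values).
VARIANTS = [(1.0, 0), (1.2, 10), (0.8, -10)]

for img_path in sorted(SRC_IMAGES.glob("*.png")):
    img = cv2.imread(str(img_path))
    label_path = SRC_LABELS / f"{img_path.stem}.txt"
    for i, (alpha, beta) in enumerate(VARIANTS):
        # Each output pixel is clip(alpha * pixel + beta) into [0, 255].
        aug = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
        out_stem = f"{img_path.stem}_aug{i}"
        cv2.imwrite(str(DST_IMAGES / f"{out_stem}.png"), aug)
        if label_path.exists():
            # Normalized YOLO box coordinates are unchanged by photometric augmentation.
            shutil.copy(label_path, DST_LABELS / f"{out_stem}.txt")
```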
Published in: | Electronics Letters, 2024-06, Vol. 60 (11), p. n/a
---|---
Main Authors: | Chien, Chun‐Tse; Ju, Rui‐Yang; Chou, Kuang‐Yi; Chiang, Jen‐Shiun
Format: | Article
Language: | English
Subjects: | biomedical imaging; computer vision; object detection; X‐ray detection
DOI: | 10.1049/ell2.13248
ISSN: | 0013-5194
EISSN: | 1350-911X
Publisher: | Wiley
Source: | IET Digital Library - eJournals; Wiley-Blackwell Open Access Journals (open access)
Online Access: | https://doi.org/10.1049/ell2.13248
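For context on the metric reported in the abstract: mAP 50–95 is the mean average precision averaged over intersection‐over‐union (IoU) thresholds from 0.50 to 0.95 in steps of 0.05, and the quoted 3.7% gain is the relative improvement from 42.16% to 43.73% (about 1.6 percentage points absolute). A minimal sketch of both computations follows; the per‐threshold AP values are placeholders, not results from the paper.

```python
# Minimal sketch: mAP 50-95 averages AP over IoU thresholds 0.50, 0.55, ..., 0.95,
# and the reported 3.7% is a relative (not absolute) improvement.
import numpy as np

iou_thresholds = np.linspace(0.50, 0.95, 10)         # 0.50, 0.55, ..., 0.95
ap_per_threshold = np.random.rand(10)                # placeholder AP values
map_50_95 = ap_per_threshold.mean()

baseline, yolov9 = 42.16, 43.73                      # mAP 50-95 in percent
relative_gain = (yolov9 - baseline) / baseline * 100
print(f"mAP 50-95 over {len(iou_thresholds)} thresholds (placeholder): {map_50_95:.4f}")
print(f"Relative improvement: {relative_gain:.1f}%")  # prints 3.7%
```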