Deep active learning framework for chest-abdominal CT scans segmentation
State-of-the-art deep learning approaches for segmentation require large, high-quality annotated datasets for training and evaluation. However, the manual processes for creating these datasets are time-consuming, labor-intensive, and especially costly in medical imaging. To address this, we employ an active learning (AL) strategy that leverages machine learning to select informative samples from the unlabeled data for annotation. Our contributions include the creation and automation of a deep learning flow, applied with AL to solve organ segmentation tasks in chest-abdominal CT scans. We have established an AL framework that automates the training of semantic segmentation models by prioritizing the labeling of the images most valuable for the segmentation task. Our solution, designed for specific anatomical structures, has been further evaluated by incorporating additional organs, datasets, and AL acquisition functions. This adaptable framework can accommodate new organ segmentation tasks, different anatomies, and varied parameters within the AL process, reducing the annotation burden in medical imaging and generalizing to other application domains. We demonstrate the tangible benefits of the AL approach, in terms of reduced annotation cost and training time, for a given supervised medical segmentation task. Our method achieved segmentation results for the spleen (Dice score 0.908 [0.895, 0.921], mIoU 0.833 [0.816, 0.849]) and liver (Dice score 0.855 [0.848, 0.861], mIoU 0.785 [0.777, 0.792]) using 2.5% of the data available for training.
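The Dice score and mIoU reported above are standard overlap metrics between a predicted and a ground-truth binary mask. A minimal pure-Python sketch (an illustration, not the authors' implementation):

```python
def dice(pred, target):
    """Dice coefficient between two binary masks (flat 0/1 sequences)."""
    inter = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

def iou(pred, target):
    """Intersection-over-Union (Jaccard index) between two binary masks."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

# Toy 4-pixel masks:
pred   = [1, 1, 0, 0]
target = [1, 0, 0, 1]
# intersection = 1, |pred| + |target| = 4  ->  Dice = 0.5
# union = 3                                ->  IoU  = 1/3
```

mIoU is then the mean of the per-class IoU values over the segmented classes.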
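The core AL step the abstract describes — scoring unlabeled scans and sending the most informative ones for annotation — can be sketched with entropy-based uncertainty sampling, one common acquisition function. The record does not list the paper's specific acquisition functions, so `predict_proba`, `select_for_annotation`, and the toy probabilities below are illustrative assumptions:

```python
import math

def entropy(probs):
    """Shannon entropy of a per-sample predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(unlabeled, predict_proba, budget):
    """Rank unlabeled samples by predictive entropy and return the
    `budget` most uncertain ones, to be sent for manual labeling."""
    scored = [(entropy(predict_proba(x)), x) for x in unlabeled]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [x for _, x in scored[:budget]]

# Toy example with fixed "model" probabilities per sample id:
probs = {"a": [0.5, 0.5], "b": [0.9, 0.1], "c": [0.6, 0.4]}
picked = select_for_annotation(["a", "b", "c"], probs.get, budget=2)
# "a" (most uncertain) and "c" are picked before the confident "b"
```

In the full loop, the picked samples are annotated, added to the training set, the model is retrained, and the selection repeats until the labeling budget is exhausted.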
Published in: Expert Systems with Applications, 2025-03, Vol. 263, Article 125522
Main Authors: Rokach, Lital; Aperstein, Yehudit; Akselrod-Ballin, Ayelet
Format: Article
Language: English
Subjects: Active learning; Artificial intelligence; CT segmentation; Neural networks; Organ segmentation; Semi-supervised learning
Creator: Rokach, Lital; Aperstein, Yehudit; Akselrod-Ballin, Ayelet
DOI: 10.1016/j.eswa.2024.125522
ISSN: 0957-4174
Source: ScienceDirect Freedom Collection 2022-2024
Subjects: Active learning; Artificial intelligence; CT segmentation; Neural networks; Organ segmentation; Semi-supervised learning