
Comparison of different CNNs for breast tumor classification from ultrasound images

Breast cancer is one of the deadliest cancers worldwide. Timely detection could reduce mortality rates. In the clinical routine, classifying benign and malignant tumors from ultrasound (US) images is a crucial but challenging task, so an automated method that can deal with the variability of the data is needed. In this paper, we compared different Convolutional Neural Networks (CNNs) and transfer learning methods for the task of automated breast tumor classification. The architectures investigated in this study were VGG-16 and Inception V3. Two training strategies were investigated: the first used the pretrained models as feature extractors, and the second fine-tuned the pretrained models. A total of 947 images were used: 587 were US images of benign tumors and 360 of malignant tumors. 678 images were used for training and validation, while 269 images were used for testing the models. Accuracy and the Area Under the receiver operating characteristic Curve (AUC) were used as performance metrics. The best performance was obtained by fine-tuning VGG-16, with an accuracy of 0.919 and an AUC of 0.934. These results open the opportunity for further investigation aimed at improving cancer detection.
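
The abstract compares two transfer-learning strategies: using a pretrained network as a frozen feature extractor versus fine-tuning its top layers. As a rough illustration only, the sketch below shows how these two strategies might be set up for VGG-16 in tf.keras; the classification head, input size, optimizer, and which layers are unfrozen are assumptions for the sake of the example, not the authors' reported configuration.

```python
# Illustrative sketch (not the paper's code): feature extraction vs. fine-tuning
# of an ImageNet-pretrained VGG-16 for binary benign/malignant classification.
import tensorflow as tf


def build_vgg16_classifier(fine_tune: bool = False) -> tf.keras.Model:
    # Pretrained convolutional base without the ImageNet classification head.
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3)
    )

    if fine_tune:
        # Strategy 2: fine-tune the pretrained model.
        # Here only the last convolutional block is unfrozen (assumed choice).
        base.trainable = True
        for layer in base.layers:
            layer.trainable = layer.name.startswith("block5")
    else:
        # Strategy 1: use the pretrained model purely as a feature extractor.
        base.trainable = False

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = tf.keras.applications.vgg16.preprocess_input(inputs)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.5)(x)  # assumed regularization
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss="binary_crossentropy",
        # The two metrics reported in the paper: accuracy and AUC.
        metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
    )
    return model


# Usage sketch: train on the 678 train/validation images, then evaluate
# accuracy and AUC on the 269 held-out test images.
# model = build_vgg16_classifier(fine_tune=True)
# model.fit(train_ds, validation_data=val_ds, epochs=20)
# test_loss, test_acc, test_auc = model.evaluate(test_ds)
```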

Bibliographic Details
Published in: arXiv.org, 2020-12-28
Main Authors: Lazo, Jorge F, Moccia, Sara, Frontoni, Emanuele, De Momi, Elena
Format: Article
Language: English
Subjects: Artificial neural networks; Automation; Cancer; Feature extraction; Image classification; Investigations; Model accuracy; Performance measurement; Training; Tumors; Ultrasonic imaging; Ultrasound
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Online Access: Get full text