
Few‐shot learning with deformable convolution for multiscale lesion detection in mammography

Purpose Image‐based breast lesion detection is a powerful clinical diagnosis technology. In recent years, deep learning architectures have achieved considerable success in medical image analysis; however, they always require large‐scale samples. In mammography images, breast lesions are inconspicuous...

Bibliographic Details
Published in: Medical physics (Lancaster) 2020-07, Vol.47 (7), p.2970-2985
Main Authors: Li, Ce, Zhang, Dong, Tian, Zhiqiang, Du, Shaoyi, Qu, Yanyun
Format: Article
Language:English
description Purpose Image‐based breast lesion detection is a powerful clinical diagnosis technology. In recent years, deep learning architectures have achieved considerable success in medical image analysis; however, they always require large‐scale samples. In mammography images, breast lesions are inconspicuous, multiscale, and have blurred edges. Moreover, few well‐labeled images exist. Because of these factors, the detection accuracy of conventional deep learning methods is low. Therefore, we attempted to improve the accuracy of mammary lesion detection by introducing transfer learning (TL) into a deep learning framework for the few‐shot learning task, thus providing a method that can further assist physicians in detecting breast lesions. Methods In this paper, we propose a method called “few‐shot learning with deformable convolution for multiscale lesion detection in mammography,” named FDMNet. Deformable convolution is introduced to enhance the network’s ability to detect lesions, and the sensitivity of the multiscale feature space is reinforced by a feature pyramid method. Furthermore, by introducing location information into the predictor, the sensitivity of the model to lesion location is also enhanced. Through the applied TL technique, the proposed method mines potentially common feature knowledge in the source domain and transfers it to the target domain to improve the accuracy of breast lesion detection in the few‐shot learning task. Results On the publicly available screening mammography datasets CBIS‐DDSM and Mini‐MIAS, the proposed method performs better than five widely used detection methods. On the CBIS‐DDSM dataset, its comprehensive score, sensitivity, precision, and mean dice similarity coefficient are 0.911, 0.949, 0.873, and 0.913, respectively, and on the Mini‐MIAS dataset, these values are 0.931, 0.966, 0.882, and 0.941, respectively.
Conclusions To achieve the few‐shot learning required for medical image analysis, the proposed method uses TL to execute feature knowledge transformation and includes deformable convolution to build a feature pyramid structure, which enhances the learning performance of the network for lesions. The results of comparative numerical experiments show that the proposed method outperforms some state‐of‐the‐art methods.
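The deformable convolution named in the Methods can be illustrated with a minimal, single-channel sketch: each kernel tap samples the input at a fractional, learned offset via bilinear interpolation instead of a fixed grid location. The helper names and toy sizes below are illustrative assumptions, not the paper's FDMNet implementation.

```python
import math

def bilinear(img, y, x):
    """Bilinearly sample img (a list of rows) at fractional (y, x);
    out-of-bounds reads contribute 0."""
    h, w = len(img), len(img[0])
    y0, x0 = math.floor(y), math.floor(x)

    def at(r, c):
        return img[r][c] if 0 <= r < h and 0 <= c < w else 0.0

    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * at(y0, x0)
            + (1 - dy) * dx * at(y0, x0 + 1)
            + dy * (1 - dx) * at(y0 + 1, x0)
            + dy * dx * at(y0 + 1, x0 + 1))

def deform_conv_point(img, weights, offsets, y, x):
    """One output value of a 3x3 deformable convolution centred at (y, x).
    weights: 3x3 kernel; offsets: dict mapping a tap (i, j) in {-1,0,1}^2
    to a fractional (dy, dx) shift (unlisted taps default to zero shift)."""
    out = 0.0
    for i in range(-1, 2):
        for j in range(-1, 2):
            dy, dx = offsets.get((i, j), (0.0, 0.0))
            out += weights[i + 1][j + 1] * bilinear(img, y + i + dy, x + j + dx)
    return out
```

With all offsets zero this reduces to an ordinary 3×3 convolution; learned nonzero offsets let the receptive field deform around irregular, blurred lesion boundaries, which is why the technique suits inconspicuous mammographic lesions.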
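The Results report sensitivity, precision, and a mean dice similarity coefficient. Below is a small sketch of how such overlap metrics are commonly computed from binary lesion masks; the helper names are hypothetical, and this is not the authors' evaluation code.

```python
def confusion_counts(pred, truth):
    """Count true positives, false positives, and false negatives
    between two flat binary masks of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp, fp, fn

def sensitivity(pred, truth):
    """Recall: fraction of true lesion pixels that were detected."""
    tp, _, fn = confusion_counts(pred, truth)
    return tp / (tp + fn) if tp + fn else 0.0

def precision(pred, truth):
    """Fraction of predicted lesion pixels that are truly lesion."""
    tp, fp, _ = confusion_counts(pred, truth)
    return tp / (tp + fp) if tp + fp else 0.0

def dice(pred, truth):
    """Dice similarity coefficient: 2*TP / (2*TP + FP + FN)."""
    tp, fp, fn = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else 0.0
```

The dice coefficient is the harmonic mean of precision and sensitivity, which is consistent with the reported per-dataset values lying between those two numbers.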
doi 10.1002/mp.14129
pmid 32160321
publisher United States
rights 2020 American Association of Physicists in Medicine
issn 0094-2405
eissn 2473-4209
source Wiley
subjects attention factor
Breast - diagnostic imaging
Breast Neoplasms - diagnostic imaging
deformable convolution
Early Detection of Cancer
few‐shot learning
Humans
lesion detection
Mammography
multiscale feature