
How to Extract More Information With Less Burden: Fundus Image Classification and Retinal Disease Localization With Ophthalmologist Intervention

Image classification using convolutional neural networks (CNNs) outperforms other state-of-the-art methods. Moreover, attention can be visualized as a heatmap to improve the explainability of a CNN's results. We designed a framework that generates heatmaps reflecting lesion regions precisely. We generated initial heatmaps with gradient-weighted class activation mapping (Grad-CAM). Assuming that these Grad-CAM heatmaps correctly reveal the lesion regions, we apply an attention-mining technique to them to obtain integrated heatmaps. Conversely, assuming that the Grad-CAM heatmaps incorrectly reveal the lesion regions, we design a dissimilarity loss that pushes the newly generated heatmaps away from the Grad-CAM heatmaps. In this study, we found that having professional ophthalmologists select 30% of the heatmaps that cover the lesion regions led to better results, because this step integrates prior clinical knowledge into the system. Furthermore, we design a knowledge-preservation loss that minimizes the discrepancy between the heatmaps generated by the updated CNN model and the selected heatmaps. Experiments using fundus images revealed that our method improved classification accuracy and generated attention regions closer to the ground-truth lesion regions than existing methods.
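The core heatmap step named in this abstract, Grad-CAM, weights each activation map of a chosen convolutional layer by the spatially averaged gradient of the class score with respect to that map, sums the weighted maps, and keeps only the positive evidence. The sketch below illustrates that step in PyTorch; the tiny network, the 224x224 input, and the class index are hypothetical placeholders rather than the authors' architecture or data, and the attention-mining, dissimilarity, and knowledge-preservation losses from the abstract would be built on top of heatmaps like this one.

```python
# Minimal Grad-CAM sketch (PyTorch). Illustrative only; not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Small stand-in classifier; the paper uses a much deeper CNN."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),   # last conv layer: Grad-CAM target
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        a = self.features(x)                               # activation maps A_k
        logits = self.classifier(F.adaptive_avg_pool2d(a, 1).flatten(1))
        return logits, a

def grad_cam(model, image, class_idx):
    """Weight each activation map by the spatially averaged gradient of the
    class score w.r.t. that map, sum the weighted maps, and apply ReLU."""
    model.eval()
    logits, activations = model(image)
    activations.retain_grad()                              # keep d(score)/dA after backward
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = activations.grad.mean(dim=(2, 3), keepdim=True)        # alpha_k
    cam = F.relu((weights * activations).sum(dim=1, keepdim=True))   # coarse heatmap
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)        # upsample to image size
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)         # normalize to [0, 1]
    return cam.squeeze(0).squeeze(0).detach()

# Usage with a random stand-in for a fundus image and a hypothetical class index:
model = TinyCNN()
image = torch.rand(1, 3, 224, 224)
heatmap = grad_cam(model, image, class_idx=0)
print(heatmap.shape)                                       # torch.Size([224, 224])
```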

Bibliographic Details
Published in: IEEE journal of biomedical and health informatics, 2020-12, Vol.24 (12), p.3351-3361
Main Authors: Meng, Qier; Hashimoto, Yohei; Satoh, Shin'ichi
Format: Article
Language: English
DOI: 10.1109/JBHI.2020.3011805
PMID: 32750970
ISSN: 2168-2194
EISSN: 2168-2208
Source: IEEE Electronic Library (IEL) Journals
Subjects:
Artificial neural networks
attention mining
Bioinformatics
Blindness
Classification
Diseases
dissimilarity
Fundus Oculi
grad-CAM
Ground truth
Heating systems
Humans
Image classification
Image Interpretation, Computer-Assisted - methods
Information processing
Knowledge
knowledge preservation
Lesion localization
Lesions
Localization
Neural networks
Neural Networks, Computer
Ophthalmologists
Retina
Retina - diagnostic imaging
Retinal Diseases - diagnostic imaging
Visualization