Color fundus image registration using a learning-based domain-specific landmark detection methodology
Published in: | Computers in biology and medicine, 2022-01, Vol. 140, p. 105101, Article 105101 |
---|---|
Main Authors: | Rivas-Villar, David; Hervella, Álvaro S.; Rouco, José; Novo, Jorge |
Format: | Article |
Language: | English |
creator | Rivas-Villar, David; Hervella, Álvaro S.; Rouco, José; Novo, Jorge |
description | Medical imaging, and particularly retinal imaging, enables the accurate diagnosis of many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients but also to contrast data against a model or across a population. Currently, this field is dominated by complex classical methods, because novel deep learning methods cannot yet compete with them in terms of results, and commonly used learning methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images, building on previous works that employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning methods to detect these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. The keypoints are matched using a method based on RANSAC (Random Sample Consensus), without the need to compute complex descriptors. Our method was tested on the public FIRE dataset, although the landmark detection network was trained on the DRIVE dataset. It provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P, and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and outperforms the deep learning methods in the state of the art.
• The proposed automatic method accurately registers retinal images.
• A deep neural network detects highly specific domain-related landmarks.
• The landmarks can be matched without descriptors using a RANSAC-based method.
• Using deep learning landmarks instead of classical ones improves the registration.
• The proposal outperforms state-of-the-art deep learning methods. |
doi_str_mv | 10.1016/j.compbiomed.2021.105101 |
format | article |
publisher | Elsevier Ltd (United States) |
pmid | 34875412 |
eissn | 1879-0534 |
rights | © 2021 The Authors. Published by Elsevier Ltd. |
orcidid | https://orcid.org/0000-0001-7824-8098 |
issn | 0010-4825; 1879-0534 |
language | eng |
source | ScienceDirect Freedom Collection 2022-2024 |
subjects | Algorithms; Automation; Bifurcations; Blood vessels; Clinical medicine; Color; Color fundus images; Color vision; Datasets; Deep learning; Diabetes mellitus; Diabetic retinopathy; Eye; Hypertension; Image processing; Image registration; Medical image registration; Medical imaging; Neural networks; Registration; Retina; Retinal images; Teaching methods |
title | Color fundus image registration using a learning-based domain-specific landmark detection methodology |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T06%3A25%3A07IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Color%20fundus%20image%20registration%20using%20a%20learning-based%20domain-specific%20landmark%20detection%20methodology&rft.jtitle=Computers%20in%20biology%20and%20medicine&rft.au=Rivas-Villar,%20David&rft.date=2022-01&rft.volume=140&rft.spage=105101&rft.epage=105101&rft.pages=105101-105101&rft.artnum=105101&rft.issn=0010-4825&rft.eissn=1879-0534&rft_id=info:doi/10.1016/j.compbiomed.2021.105101&rft_dat=%3Cproquest_cross%3E2608132147%3C/proquest_cross%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c452t-65892eefb0c530f8f421cb93fd44ecef9e32f6abfc9c4ebafb49668860fcd4723%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2615479867&rft_id=info:pmid/34875412&rfr_iscdi=true |
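The matching stage described in the abstract — RANSAC over candidate landmark correspondences, with no keypoint descriptors — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the similarity-transform motion model, the 2-point minimal sample, and the pixel tolerance are assumptions made for the sketch, and the landmark coordinates would in practice come from the paper's bifurcation/crossover detection network.

```python
import numpy as np

def fit_similarity(src, dst):
    """Closed-form (Umeyama-style) least-squares similarity transform
    (scale, rotation, translation) mapping src onto dst; (N, 2) arrays."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    var_s = (s ** 2).sum() / len(src)
    cov = d.T @ s / len(src)
    U, S, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    R = U @ np.diag([1.0, sign]) @ Vt          # proper rotation, det(R) = +1
    scale = (S[0] + sign * S[1]) / var_s
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def ransac_register(pts_a, pts_b, n_iter=20000, tol=3.0, seed=0):
    """Descriptor-free RANSAC matching: hypothesise that a random ordered
    pair of landmarks in image A corresponds to a random ordered pair in
    image B, fit the minimal similarity transform for that hypothesis, and
    score it by how many transformed A-landmarks fall within `tol` pixels
    of some B-landmark. The best-scoring transform wins."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        ia = rng.choice(len(pts_a), size=2, replace=False)
        ib = rng.choice(len(pts_b), size=2, replace=False)
        if np.allclose(pts_a[ia[0]], pts_a[ia[1]]):
            continue  # degenerate hypothesis: coincident sample points
        scale, R, t = fit_similarity(pts_a[ia], pts_b[ib])
        proj = pts_a @ (scale * R).T + t       # apply hypothesis to all of A
        dists = np.linalg.norm(proj[:, None, :] - pts_b[None, :, :], axis=-1)
        inliers = int((dists.min(axis=1) < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (scale, R, t)
    return best_inliers, best_model
```

Because the arrangement of vessel bifurcations is unique and nearly rigid between views of the same eye, exhaustive correspondence hypotheses of this kind can replace descriptor matching entirely: only the (rare) geometrically consistent hypothesis accumulates a large inlier set.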