
Matching fusion framework on multi-modal data for glaucoma severity diagnosis

In response to the challenge of limited medical datasets hindering the effective extraction of crucial features for glaucoma severity diagnosis, we propose a novel matching fusion framework based on multi-modal data. In the framework, firstly, a matching fusion method that matches different networks to different types of datasets is proposed. Secondly, two fusion strategies based on the matching fusion method are designed: a multi-data fusion strategy and a multi-model fusion strategy. Thirdly, four fusion methods based on three glaucoma data types are designed. Lastly, a new fusion classifier for the obtained fusion feature map is proposed, which is mainly composed of a fusion feature convolution block and a multi-layer perceptron block. To verify the framework, we conducted glaucoma classification experiments on datasets collected from the First Affiliated Hospital of Kunming Medical University, achieving an accuracy of 0.965, an F1 score of 0.966, and an AUC of 0.997. The results demonstrated that our framework outperformed state-of-the-art methods.
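The fusion idea described in the abstract — modality-specific networks whose feature maps are fused and then classified by a convolution block followed by a multi-layer perceptron — can be sketched as below. Everything in this sketch (the shapes, the choice of channel-wise concatenation as the fusion step, the layer sizes, and the three severity classes) is an illustrative assumption, not the authors' actual design, which this record does not detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_block(x, kernels):
    """Valid 2D convolution of a (C, H, W) map with (K, C, kh, kw) kernels,
    followed by ReLU. Naive loops, for clarity rather than speed."""
    K, C, kh, kw = kernels.shape
    _, H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * kernels[k])
    return np.maximum(out, 0.0)  # ReLU

def mlp(v, w1, b1, w2, b2):
    """Two-layer perceptron: ReLU hidden layer, softmax output."""
    h = np.maximum(v @ w1 + b1, 0.0)
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Two hypothetical modality-specific feature maps (e.g. one from an image
# network, one from a clinical-data network), each 4 channels of 8x8.
feat_a = rng.standard_normal((4, 8, 8))
feat_b = rng.standard_normal((4, 8, 8))

# Fusion step (assumed here to be channel-wise concatenation).
fused = np.concatenate([feat_a, feat_b], axis=0)                  # (8, 8, 8)

# "Fusion feature convolution block" then global average pooling.
conv_out = conv_block(fused, rng.standard_normal((6, 8, 3, 3)))   # (6, 6, 6)
pooled = conv_out.mean(axis=(1, 2))                               # (6,)

# "Multi-layer perceptron block" over 3 assumed severity classes.
probs = mlp(pooled,
            rng.standard_normal((6, 16)), np.zeros(16),
            rng.standard_normal((16, 3)), np.zeros(3))            # (3,)

print(probs.shape, float(probs.sum()))
```

The concatenation-then-convolve pattern is one common late-fusion choice; the paper's four fusion methods may differ.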


Bibliographic Details
Published in: Computers & Electrical Engineering, 2025-04, Vol. 123, p. 109982, Article 109982
Main Authors: Yi, Sanli; Feng, Xueli
Format: Article
Language: English
Subjects: CNNs; Matching fusion; Multi-modal data; Severity glaucoma diagnosis
ISSN: 0045-7906
DOI: 10.1016/j.compeleceng.2024.109982