Concept graph embedding models for enhanced accuracy and interpretability
Published in: Machine learning: science and technology, 2024-09, Vol. 5 (3), p. 35042
Main Authors: Kim, Sangwon; Ko, Byoung Chul
Format: Article
Language: English
Creators: Kim, Sangwon; Ko, Byoung Chul
Abstract: In fields requiring high accountability, it is necessary to understand how deep-learning models make decisions when analyzing the causes of image classification. Concept-based interpretation methods have recently been introduced to reveal the internal mechanisms of deep learning models using high-level concepts. However, such methods are constrained by a trade-off between accuracy and interpretability. For instance, in real-world environments, unlike in well-curated training data, the accurate prediction of expected concepts becomes a challenge owing to the various distortions and complexities introduced by different objects. To overcome this trade-off, we propose concept graph embedding models (CGEM), reflecting the complex dependencies and structures among concepts through the learning of mutual directionalities. The concept graph convolutional neural network (Concept GCN), a downstream task of CGEM, differs from previous methods that solely determine the presence of concepts because it performs a final classification based on the relationships between concepts learned through graph embedding. This process endows the model with high resilience even in the presence of incorrect concepts. In addition, we utilize a deformable bipartite GCN for object-centric concept encoding in the earlier stages, which enhances the homogeneity of the concepts. The experimental results show that, based on deformable concept encoding, CGEM mitigates the trade-off between task accuracy and interpretability. Moreover, this approach increases resilience and interpretability while maintaining robustness against various real-world concept distortions and incorrect concept interventions. Our code is available at https://github.com/jumpsnack/cgem.
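The abstract's core idea — classifying from the relationships between detected concepts rather than from concept presence alone — can be sketched as a single symmetrically normalized graph-convolution layer over a concept graph. Everything below (function names, the toy adjacency, the feature dimensions) is an illustrative assumption, not the authors' implementation; their code is at the linked repository.

```python
import numpy as np

def normalized_adjacency(A):
    # Symmetrically normalized adjacency with self-loops: Â = D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def concept_gcn_logits(concept_scores, A, W1, W2):
    """Two rounds of message passing over the concept graph, then mean-pool to class logits.

    concept_scores: (num_concepts, f) concept activations from a backbone
    A:              (num_concepts, num_concepts) concept-dependency adjacency
    """
    A_norm = normalized_adjacency(A)
    H = np.maximum(A_norm @ concept_scores @ W1, 0.0)  # neighborhood aggregation + ReLU
    return (A_norm @ H @ W2).mean(axis=0)              # pooled class logits

# Toy example: 4 concepts with chain-like dependencies, 3 features, 2 classes.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = rng.normal(size=(4, 3))
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 2))
logits = concept_gcn_logits(scores, A, W1, W2)
print(logits.shape)  # (2,)
```

Because each concept's representation is mixed with its neighbors' before classification, a single corrupted concept score is partially smoothed out by the graph structure — a rough analogue of the resilience to incorrect concepts the paper claims.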
DOI: 10.1088/2632-2153/ad6ad2
Publisher: IOP Publishing (Bristol)
Rights: 2024 The Author(s). Published by IOP Publishing Ltd. This work is published under the CC BY 4.0 license (https://creativecommons.org/licenses/by/4.0/).
ORCID: 0000-0002-7452-3897; 0000-0002-7284-0768
ISSN: 2632-2153
Subjects: Accuracy; Artificial neural networks; Coding; concept bottleneck model; concept graph embedding; Deep learning; deformable bipartite; Embedding; Formability; GCN; Graph neural networks; Graph theory; Homogeneity; Image classification; Image enhancement; interpretability; Machine learning; Resilience; Tradeoffs