
Estimation of Degradation Degree in Road Infrastructure Based on Multi-Modal ABN Using Contrastive Learning

Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2023-02, Vol. 23 (3), p. 1657
Main Authors: Higashi, Takaaki; Ogawa, Naoki; Maeda, Keisuke; Ogawa, Takahiro; Haseyama, Miki
Format: Article
Language: English
Description: This study presents a method for distress image classification in road infrastructure by introducing self-supervised learning. Self-supervised learning is an unsupervised learning method that does not require class labels; it can reduce annotation effort and allows machine learning to be applied to large numbers of unlabeled images. We propose a novel distress image classification method using contrastive learning, a type of self-supervised learning. Contrastive learning provides an image-domain-specific representation by constraining the latent space so that similar images are embedded near one another. We augment a single input distress image into multiple images by image transformations and construct a latent space in which the augmented images are embedded close to each other. This yields a domain-specific representation of damage in road infrastructure learned from a large number of unlabeled distress images. Finally, the representation obtained by contrastive learning is used to improve distress image classification performance: the learned contrastive model parameters initialize the distress image classification model. This makes it possible to exploit unlabeled distress images, which have been difficult to use in the past. In experiments on distress images obtained from the real world, we verify the effectiveness of the proposed method for various distress types and confirm the performance improvement.
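The contrastive objective the abstract describes — augmented views of the same unlabeled distress image pulled close together in the latent space, with other images pushed apart — can be sketched with an NT-Xent-style loss. This is a minimal generic illustration, not the authors' exact implementation: the use of cosine similarity, the temperature value, and the pairing scheme are all assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays; row i of z1 and row i of z2 are embeddings
    of two augmented views of the same (unlabeled) image.
    """
    # L2-normalize so dot products are cosine similarities
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = z1.shape[0]
    sim = z @ z.T / temperature           # (2N, 2N) similarity matrix
    np.fill_diagonal(sim, -np.inf)        # exclude self-similarity
    # positives: view i pairs with view i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row against its positive partner
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
views = rng.normal(size=(4, 8))
# nearly identical views (as after mild augmentation) vs. unrelated pairs
aligned = nt_xent_loss(views, views + 0.01 * rng.normal(size=(4, 8)))
random_pairs = nt_xent_loss(views, rng.normal(size=(4, 8)))
print(aligned < random_pairs)  # aligned views incur lower loss
```

Minimizing this loss over many unlabeled images shapes the latent space as described; the resulting encoder weights can then initialize a supervised classifier, as the abstract's final step indicates.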
DOI: 10.3390/s23031657
PMID: 36772694
ISSN: 1424-8220
Source: Open Access: PubMed Central; Publicly Available Content (ProQuest)
Subjects:
Annotations
Artificial intelligence
Classification
contrastive learning
convolutional neural network
Datasets
Deep learning
distress image classification
Image classification
Image interpretation, Computer assisted
Infrastructure
Maintenance and repair
Medical research
Methods
multi-modal learning
Neural networks
Remote sensing
Representations
Roads
Roads & highways
self-supervised learning
Unsupervised learning