Semi-Supervised Learning for Image Classification using Compact Networks in the BioMedical Context
The development of mobile and edge applications that embed deep convolutional neural models has the potential to revolutionise biomedicine. However, most deep learning models require computational resources that are not available in smartphones or edge devices; an issue that can be addressed by means of compact models. The problem with such models is that they are usually less accurate than bigger models. In this work, we study how this limitation can be addressed with the application of semi-supervised learning techniques. We conduct several statistical analyses to compare the performance of deep compact architectures trained using semi-supervised learning methods for image classification tasks in the biomedical context. In particular, we explore three families of compact networks and two families of semi-supervised learning techniques across 10 biomedical tasks. By combining semi-supervised learning methods with compact networks, it is possible to obtain a performance similar to that of standard-size networks. In general, the best results are obtained when combining data distillation with MixNet, and plain distillation with ResNet-18. Also, in general, NAS networks obtain better results than manually designed networks and quantized networks. The work presented in this paper shows the benefits of applying semi-supervised methods to compact networks; this allows us to create compact models that are not only as accurate as standard-size models, but also faster and lighter. Finally, we have developed a library that simplifies the construction of compact models using semi-supervised learning methods.
Published in: | arXiv.org 2022-05 |
---|---|
Main Authors: | Adrián Inés, Andrés Díaz-Pinto, César Domínguez, Jónathan Heras, Eloy Mata, Vico Pascual |
Format: | Article |
Language: | English |
Subjects: | Semi-supervised learning; Deep learning; Image classification; Machine learning; Distillation; Networks; Smartphones; Statistical analysis |
Online Access: | Get full text |
creator | Adrián Inés; Díaz-Pinto, Andrés; Domínguez, César; Heras, Jónathan; Mata, Eloy; Vico Pascual |
description | The development of mobile and edge applications that embed deep convolutional neural models has the potential to revolutionise biomedicine. However, most deep learning models require computational resources that are not available in smartphones or edge devices; an issue that can be addressed by means of compact models. The problem with such models is that they are usually less accurate than bigger models. In this work, we study how this limitation can be addressed with the application of semi-supervised learning techniques. We conduct several statistical analyses to compare the performance of deep compact architectures trained using semi-supervised learning methods for image classification tasks in the biomedical context. In particular, we explore three families of compact networks and two families of semi-supervised learning techniques across 10 biomedical tasks. By combining semi-supervised learning methods with compact networks, it is possible to obtain a performance similar to that of standard-size networks. In general, the best results are obtained when combining data distillation with MixNet, and plain distillation with ResNet-18. Also, in general, NAS networks obtain better results than manually designed networks and quantized networks. The work presented in this paper shows the benefits of applying semi-supervised methods to compact networks; this allows us to create compact models that are not only as accurate as standard-size models, but also faster and lighter. Finally, we have developed a library that simplifies the construction of compact models using semi-supervised learning methods. |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2667072198 |
source | Publicly Available Content Database (Proquest) (PQ_SDU_P3) |
subjects | Context; Deep learning; Distillation; Image classification; Machine learning; Networks; Semi-supervised learning; Smartphones; Statistical analysis; Teaching methods |
title | Semi-Supervised Learning for Image Classification using Compact Networks in the BioMedical Context |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T02%3A36%3A44IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Semi-Supervised%20Learning%20for%20Image%20Classification%20using%20Compact%20Networks%20in%20the%20BioMedical%20Context&rft.jtitle=arXiv.org&rft.au=Adri%C3%A1n%20In%C3%A9s&rft.date=2022-05-19&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2667072198%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_26670721983%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2667072198&rft_id=info:pmid/&rfr_iscdi=true |
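The "plain distillation" named in the abstract refers to knowledge distillation, where a compact student network is trained to mimic the temperature-softened output distribution of a larger teacher. The record does not include the paper's implementation, so the following is only an illustrative sketch of the standard distillation loss; the function names and the temperature value are our own assumptions, not taken from the paper:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge" about
    # similarities between classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    # KL divergence between the softened teacher and student
    # distributions, scaled by T^2 so gradients keep a comparable
    # magnitude across temperatures (as in standard knowledge
    # distillation). Assumes both logit lists share the same classes.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return (temperature ** 2) * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q)
    )
```

In practice this term is usually mixed with the ordinary cross-entropy on the labelled examples, which is how a compact model such as ResNet-18 can approach the accuracy of its larger teacher.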