A Dual-Purpose Deep Learning Model for Auscultated Lung and Tracheal Sound Analysis Based on Mixed Set Training
Many deep learning-based computerized respiratory sound analysis methods have previously been developed. However, these studies focus on either lung sound only or tracheal sound only. The effectiveness of using a lung sound analysis algorithm on tracheal sound and vice versa has never been investigated. Furthermore, no one knows whether using lung and tracheal sounds together in training a respiratory sound analysis model is beneficial. In this study, we first constructed a tracheal sound database, HF_Tracheal_V1, containing 10448 15-s tracheal sound recordings, 21741 inhalation labels, 15858 exhalation labels, and 6414 continuous adventitious sound (CAS) labels. HF_Tracheal_V1 and our previously built lung sound database, HF_Lung_V2, were either combined (mixed set), used one after the other (domain adaptation), or used alone to train convolutional neural network bidirectional gated recurrent unit (CNN-BiGRU) models for inhalation, exhalation, and CAS detection in lung and tracheal sounds. The results revealed that the models trained using lung sound alone performed poorly in tracheal sound analysis and vice versa. However, mixed set training or domain adaptation improved the performance for 1) inhalation and exhalation detection in lung sounds and 2) inhalation, exhalation, and CAS detection in tracheal sounds compared to positive controls (the models trained using lung sound alone and used in lung sound analysis and vice versa). In particular, the model trained on the mixed set had great flexibility to serve two purposes, lung and tracheal sound analyses, at the same time.
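The abstract refers to convolutional neural network bidirectional gated recurrent unit (CNN-BiGRU) models that produce inhalation, exhalation, and CAS predictions over time, and to a mixed-set strategy in which the lung and tracheal databases are pooled for training. The PyTorch sketch below illustrates that general setup only; it is not the authors' implementation, and the spectrogram feature size, layer widths, multi-label framing, and the `train_mixed` helper are assumptions made for this example.

```python
# Minimal illustrative sketch (not the authors' code): a CNN-BiGRU model for
# frame-wise inhalation / exhalation / CAS detection, plus a mixed-set
# training loop that pools lung and tracheal batches. All sizes are assumed.
import random

import torch
import torch.nn as nn


class CnnBiGru(nn.Module):
    def __init__(self, n_mels: int = 64, n_classes: int = 3):
        super().__init__()
        # 2-D convolutions over a (mel bins x time frames) spectrogram
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),   # pool along frequency only, keep time axis
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.gru = nn.GRU(input_size=64 * (n_mels // 4), hidden_size=128,
                          batch_first=True, bidirectional=True)
        # per-frame logits for inhalation, exhalation, and CAS (multi-label)
        self.head = nn.Linear(2 * 128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, n_frames)
        h = self.cnn(x)                                  # (B, C, F', T)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)   # (B, T, C*F')
        h, _ = self.gru(h)                               # (B, T, 256)
        return self.head(h)                              # (B, T, n_classes)


def train_mixed(model, lung_loader, tracheal_loader, epochs=10, lr=1e-3):
    """Mixed-set training: batches from both databases are pooled and shuffled
    each epoch (a simplification of mixing at the recording level)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()          # frame-wise multi-label targets
    model.train()
    for _ in range(epochs):
        batches = list(lung_loader) + list(tracheal_loader)
        random.shuffle(batches)
        for x, y in batches:                  # y: (B, T, n_classes) in {0, 1}
            opt.zero_grad()
            loss = loss_fn(model(x), y.float())
            loss.backward()
            opt.step()
    return model
```

Under the same assumptions, the domain-adaptation variant mentioned in the abstract would instead train on one loader first and then continue training on the other, rather than pooling the batches within each epoch.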
Published in: | arXiv.org 2023-01 |
---|---|
Main Authors: | Fu-Shun Hsu, Shang-Ran Huang, Chang-Fu Su, Chien-Wen Huang, Yuan-Ren Cheng, Chun-Chieh Chen, Chun-Yu Wu, Chung-Wei Chen, Yen-Chun Lai, Tang-Wei Cheng, Nian-Jhen Lin, Wan-Ling Tsai, Ching-Shiang Lu, Chuan Chen, Feipei Lai |
Format: | Article |
Language: | English |
Subjects: | Adaptation; Deep learning; Domains; Exhalation; Labels; Lungs; Respiration; Sound; Sound recordings; Training |
Online Access: | https://doi.org/10.48550/arXiv.2107.04229 |
creator | Fu-Shun Hsu; Shang-Ran Huang; Chang-Fu Su; Chien-Wen Huang; Yuan-Ren Cheng; Chun-Chieh Chen; Chun-Yu Wu; Chung-Wei Chen; Yen-Chun Lai; Tang-Wei Cheng; Nian-Jhen Lin; Wan-Ling Tsai; Ching-Shiang Lu; Chuan Chen; Feipei Lai |
description | Many deep learning-based computerized respiratory sound analysis methods have previously been developed. However, these studies focus on either lung sound only or tracheal sound only. The effectiveness of using a lung sound analysis algorithm on tracheal sound and vice versa has never been investigated. Furthermore, no one knows whether using lung and tracheal sounds together in training a respiratory sound analysis model is beneficial. In this study, we first constructed a tracheal sound database, HF_Tracheal_V1, containing 10448 15-s tracheal sound recordings, 21741 inhalation labels, 15858 exhalation labels, and 6414 continuous adventitious sound (CAS) labels. HF_Tracheal_V1 and our previously built lung sound database, HF_Lung_V2, were either combined (mixed set), used one after the other (domain adaptation), or used alone to train convolutional neural network bidirectional gated recurrent unit (CNN-BiGRU) models for inhalation, exhalation, and CAS detection in lung and tracheal sounds. The results revealed that the models trained using lung sound alone performed poorly in tracheal sound analysis and vice versa. However, mixed set training or domain adaptation improved the performance for 1) inhalation and exhalation detection in lung sounds and 2) inhalation, exhalation, and CAS detection in tracheal sounds compared to positive controls (the models trained using lung sound alone and used in lung sound analysis and vice versa). In particular, the model trained on the mixed set had great flexibility to serve two purposes, lung and tracheal sound analyses, at the same time. |
doi_str_mv | 10.48550/arxiv.2107.04229 |
format | article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-01 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2550477985 |
source | Publicly Available Content (ProQuest) |
subjects | Adaptation; Deep learning; Domains; Exhalation; Labels; Lungs; Respiration; Sound; Sound recordings; Training |
title | A Dual-Purpose Deep Learning Model for Auscultated Lung and Tracheal Sound Analysis Based on Mixed Set Training |