UNIT-DSR: Dysarthric Speech Reconstruction System Using Speech Unit Normalization

Dysarthric speech reconstruction (DSR) systems aim to automatically convert dysarthric speech into normal-sounding speech. The technology eases communication with speakers affected by the neuromotor disorder and enhances their social inclusion. NED-based (Neural Encoder-Decoder) systems have significantly improved the intelligibility of the reconstructed speech as compared with GAN-based (Generative Adversarial Network) approaches, but the approach is still limited by training inefficiency caused by the cascaded pipeline and auxiliary tasks of the content encoder, which may in turn affect the quality of reconstruction. Inspired by self-supervised speech representation learning and discrete speech units, we propose a Unit-DSR system, which harnesses the powerful domain-adaptation capacity of HuBERT for training efficiency improvement and utilizes speech units to constrain the dysarthric content restoration in a discrete linguistic space. Compared with NED approaches, the Unit-DSR system only consists of a speech unit normalizer and a Unit HiFi-GAN vocoder, which is considerably simpler without cascaded sub-modules or auxiliary tasks. Results on the UASpeech corpus indicate that Unit-DSR outperforms competitive baselines in terms of content restoration, reaching a 28.2% relative average word error rate reduction when compared to original dysarthric speech, and shows robustness against speed perturbation and noise.
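
The abstract describes a pipeline in which HuBERT features are quantized into discrete speech units, a normalizer maps dysarthric unit sequences toward their normal-speech equivalents, and a Unit HiFi-GAN vocoder resynthesizes audio. As a rough illustration of the discrete-speech-unit idea only (not the authors' released code), the sketch below derives unit sequences from a pretrained HuBERT model; the torchaudio HUBERT_BASE checkpoint, the layer choice, k = 100 clusters, and the file paths are all illustrative assumptions.

import torch
import torchaudio
from sklearn.cluster import KMeans

# Pretrained HuBERT from torchaudio; stands in for the paper's HuBERT model (assumption).
bundle = torchaudio.pipelines.HUBERT_BASE          # expects 16 kHz input
hubert = bundle.get_model().eval()

def hubert_features(wav_path, layer=6):
    # Frame-level HuBERT features (frames x dim) for one utterance.
    wav, sr = torchaudio.load(wav_path)
    wav = wav.mean(dim=0, keepdim=True)             # force mono, batch of 1
    wav = torchaudio.functional.resample(wav, sr, bundle.sample_rate)
    with torch.no_grad():
        feats, _ = hubert.extract_features(wav, num_layers=layer)
    return feats[-1].squeeze(0)                     # output of the chosen layer

# Fit k-means on features pooled from reference speech; paths are hypothetical.
train = torch.cat([hubert_features(p) for p in ["normal_a.wav", "normal_b.wav"]])
kmeans = KMeans(n_clusters=100, n_init=10).fit(train.numpy())

# Any utterance, dysarthric or not, then maps to a sequence of discrete unit IDs.
units = kmeans.predict(hubert_features("dysarthric.wav").numpy())
print(units[:20])                                   # the discrete "content" sequence

In the paper's framing, the unit normalizer would then map such a dysarthric unit sequence to a normal-speech unit sequence, and the Unit HiFi-GAN vocoder would synthesize the waveform; neither of those stages is sketched here.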

Bibliographic Details
Main Authors: Wang, Yuejiao; Wu, Xixin; Wang, Disong; Meng, Lingwei; Meng, Helen
Format: Conference Proceeding
Language: English
Subjects: Adaptation models; dysarthric speech reconstruction; Perturbation methods; Pipelines; Representation learning; Signal processing; speech normalization; speech representation learning; speech units; Training; Vocoders
Online Access: https://ieeexplore.ieee.org/document/10446921
Published in: ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024, p. 12306-12310
Publication Date: 2024-04-14
Publisher: IEEE
DOI: 10.1109/ICASSP48485.2024.10446921
EISSN: 2379-190X
EISBN: 9798350344851