
On Loss Functions for Supervised Monaural Time-Domain Speech Enhancement

Many deep learning-based speech enhancement algorithms are designed to minimize the mean-square error (MSE) in some transform domain between a predicted and a target speech signal. However, optimizing for MSE does not necessarily guarantee high speech quality or intelligibility, which is the ultimate goal of many speech enhancement algorithms. Additionally, little is known about the impact of the loss function on the emerging class of time-domain deep learning-based speech enhancement systems. We study how popular loss functions influence the performance of time-domain deep learning-based speech enhancement systems. First, we demonstrate that perceptually inspired loss functions might be advantageous over classical loss functions like MSE. Furthermore, we show that the learning rate is a crucial design parameter even for adaptive gradient-based optimizers, which has been generally overlooked in the literature. Also, we found that waveform matching performance metrics must be used with caution, as they can fail completely in certain situations. Finally, we show that a loss function based on the scale-invariant signal-to-distortion ratio (SI-SDR) achieves good general performance across a range of popular speech enhancement evaluation metrics, which suggests that SI-SDR is a good candidate as a general-purpose loss function for speech enhancement systems.
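The SI-SDR objective highlighted in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration under standard definitions of SI-SDR, not the authors' implementation; the `eps` regularizer is an assumption added for numerical safety:

```python
import numpy as np

def si_sdr(estimate, target, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio (dB).

    Projects the estimate onto the target, so rescaling the
    estimate leaves the score (essentially) unchanged.
    """
    estimate = np.asarray(estimate, dtype=float)
    target = np.asarray(target, dtype=float)
    # Optimal scaling of the target that best explains the estimate.
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    projection = alpha * target      # target component of the estimate
    noise = estimate - projection    # residual distortion
    ratio = np.dot(projection, projection) / (np.dot(noise, noise) + eps)
    return 10.0 * np.log10(ratio + eps)

def neg_si_sdr_loss(estimate, target):
    """Used as a training loss: maximizing SI-SDR = minimizing its negative."""
    return -si_sdr(estimate, target)
```

In a time-domain enhancement system, `neg_si_sdr_loss` would be computed per utterance between the network output and the clean reference waveform; the scale invariance means the network is not penalized for producing a correctly shaped but globally rescaled signal.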

Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020, Vol. 28, pp. 825-838
Main Authors: Kolbaek, Morten; Tan, Zheng-Hua; Jensen, Soren Holdt; Jensen, Jesper
Format: Article
Language:English
DOI: 10.1109/TASLP.2020.2968738
ISSN: 2329-9290
EISSN: 2329-9304
Source: IEEE Electronic Library (IEL) Journals; ACM OPEN Journals (Jisc Collections, 2023-2025)
Subjects: Algorithms; Deep learning; Design optimization; Design parameters; Fully convolutional neural networks; Intelligibility; Machine learning; Mean square error methods; Noise measurement; Objective intelligibility; Performance measurement; Speech; Speech enhancement; Speech processing; Time domain analysis; Time-domain; Training; Waveforms