Evaluation of Deep-Learning-Based Voice Activity Detectors and Room Impulse Response Models in Reverberant Environments
State-of-the-art deep-learning-based voice activity detectors (VADs) are often trained with anechoic data. However, real acoustic environments are generally reverberant, which causes performance to deteriorate significantly. To mitigate this mismatch between training data and real data, we simulate an augmented training set that contains nearly five million utterances. This extension comprises anechoic utterances and their reverberant modifications, generated by convolving the anechoic utterances with a variety of room impulse responses (RIRs). We consider five different models to generate RIRs, and five different VADs that are trained with the augmented training set. We test all trained systems in three different real reverberant environments. Experimental results show a 20% average increase in accuracy, precision, and recall for all detectors and RIR models, compared with anechoic training. Furthermore, one of the RIR models consistently yields better performance than the others for all tested VADs, and one of the VADs consistently outperforms the other VADs in all experiments.
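The augmentation step the abstract describes is plain convolution of clean speech with an RIR. Below is a minimal Python sketch of that pipeline. Note the hedges: the exponential-decay noise RIR is a generic textbook stand-in, not one of the five RIR models the paper evaluates, and the names `synthetic_rir` and `reverberate` are illustrative, not from the paper.

```python
# Minimal sketch of reverberant data augmentation by RIR convolution.
# Assumption: a toy exponential-decay noise RIR stands in for the paper's
# five RIR models, which are not reproduced here.
import numpy as np
from scipy.signal import fftconvolve

def synthetic_rir(fs=16000, rt60=0.5, length_s=1.0, rng=None):
    """Toy RIR: white noise shaped by an exponential energy decay set by RT60."""
    rng = rng or np.random.default_rng(0)
    n = int(fs * length_s)
    t = np.arange(n) / fs
    # Energy falls by 60 dB over rt60 seconds -> amplitude envelope 10^(-3 t / rt60).
    envelope = np.exp(-3.0 * np.log(10) * t / rt60)
    return rng.standard_normal(n) * envelope

def reverberate(anechoic, rir):
    """Convolve anechoic speech with an RIR, trim to length, renormalize peak."""
    wet = fftconvolve(anechoic, rir, mode="full")[: len(anechoic)]
    return wet / (np.max(np.abs(wet)) + 1e-12)

# Example: one anechoic utterance augmented with several simulated rooms.
fs = 16000
anechoic = np.random.default_rng(1).standard_normal(fs * 2)  # placeholder utterance
augmented = [reverberate(anechoic, synthetic_rir(fs, rt60)) for rt60 in (0.2, 0.5, 0.9)]
```

In the paper's setup, each anechoic utterance would instead be convolved with RIRs drawn from one of the five evaluated RIR models, and the VADs would then be retrained on the union of anechoic and reverberant utterances.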
Main Authors: Ivry, Amir; Cohen, Israel; Berdugo, Baruch
Format: Conference Proceeding
Language: English
Subjects: Acoustics; deep learning; Detectors; Feature extraction; Libraries; reverberation; room impulse response; Speech processing; Training; Training data; Voice activity detection
Online Access: Request full text
Field | Value |
---|---|
container_end_page | 410 |
container_start_page | 406 |
creator | Ivry, Amir; Cohen, Israel; Berdugo, Baruch |
doi_str_mv | 10.1109/ICASSP40776.2020.9054610 |
format | conference_proceeding |
identifier | EISSN: 2379-190X |
ispartof | ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, p.406-410 |
issn | 2379-190X |
language | eng |
recordid | cdi_ieee_primary_9054610 |
source | IEEE Xplore All Conference Series |
subjects | Acoustics; deep learning; Detectors; Feature extraction; Libraries; reverberation; room impulse response; Speech processing; Training; Training data; Voice activity detection |
title | Evaluation of Deep-Learning-Based Voice Activity Detectors and Room Impulse Response Models in Reverberant Environments |