Self-Supervised Pretraining for Robust Personalized Voice Activity Detection in Adverse Conditions
In this paper, we propose the use of self-supervised pretraining on a large unlabelled data set to improve the performance of a personalized voice activity detection (VAD) model in adverse conditions. We pretrain a long short-term memory (LSTM)-encoder using the autoregressive predictive coding (APC) framework and fine-tune it for personalized VAD. We also propose a denoising variant of APC, with the goal of improving the robustness of personalized VAD. The trained models are systematically evaluated on both clean speech and speech contaminated by various types of noise at different SNR-levels and compared to a purely supervised model. Our experiments show that self-supervised pretraining not only improves performance in clean conditions, but also yields models which are more robust to adverse conditions compared to purely supervised learning.
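The record gives only the abstract, so no implementation details are available here. As a rough illustration of the APC objective it describes (an LSTM encoder trained to predict feature frames a few steps ahead), the Python sketch below assumes 40-dimensional log-mel input features, a PyTorch LSTM, an L1 loss, and a prediction shift of 3 frames; all names and hyperparameters are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn as nn

class APCEncoder(nn.Module):
    """LSTM encoder trained with autoregressive predictive coding (APC):
    given past feature frames, predict the frame `shift` steps ahead."""

    def __init__(self, n_mels=40, hidden=512, layers=3):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, n_mels)  # map hidden states back to feature space

    def forward(self, x):
        h, _ = self.lstm(x)      # (batch, time, hidden)
        return self.proj(h)      # predicted future frames, (batch, time, n_mels)


def apc_loss(model, feats, shift=3):
    """L1 loss between predicted and true frames `shift` steps in the future.
    In a denoising variant, `feats` would presumably be a noise-corrupted copy
    while the target below stays clean (an assumption; the record gives no details)."""
    pred = model(feats[:, :-shift, :])   # predictions from frames 0 .. T-shift-1
    target = feats[:, shift:, :]         # ground-truth frames shift .. T-1
    return torch.nn.functional.l1_loss(pred, target)


# Toy usage: a batch of 8 utterances, 200 frames of 40-dim log-mel features.
model = APCEncoder()
feats = torch.randn(8, 200, 40)
loss = apc_loss(model, feats, shift=3)
loss.backward()
```

After pretraining with such an objective, the encoder would be fine-tuned for personalized VAD as the abstract describes; how the speaker information is injected at that stage is not specified in this record.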
Main Authors: | Bovbjerg, Holger Severin; Jensen, Jesper; Ostergaard, Jan; Tan, Zheng-Hua |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Deep Learning; Noise reduction; Predictive coding; Robustness; Self-Supervised Learning; Signal processing; Supervised learning; Target Speaker; Training; Voice activity detection |
Online Access: | Request full text |
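The abstract also states that the models are evaluated on speech contaminated by various noise types at different SNR levels. A common way to construct such test conditions is to scale a noise signal to a target SNR before adding it to the clean utterance; the NumPy sketch below illustrates that step. The noise types and SNR values used in the paper are not listed in this record, so the figures here are placeholders.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then add it to `speech`. Both inputs are 1-D float arrays of equal length."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12   # avoid division by zero
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

# Example: sweep a few SNR levels (placeholder values, not the paper's).
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)   # stand-in for 1 s of 16 kHz speech
noise = rng.standard_normal(16000)
for snr in (20, 10, 0, -5):
    noisy = mix_at_snr(clean, noise, snr)
```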
cited_by | |
---|---|
cites | |
container_end_page | 10130 |
container_issue | |
container_start_page | 10126 |
container_title | |
container_volume | |
creator | Bovbjerg, Holger Severin; Jensen, Jesper; Ostergaard, Jan; Tan, Zheng-Hua |
description | In this paper, we propose the use of self-supervised pretraining on a large unlabelled data set to improve the performance of a personalized voice activity detection (VAD) model in adverse conditions. We pretrain a long short-term memory (LSTM)-encoder using the autoregressive predictive coding (APC) framework and fine-tune it for personalized VAD. We also propose a denoising variant of APC, with the goal of improving the robustness of personalized VAD. The trained models are systematically evaluated on both clean speech and speech contaminated by various types of noise at different SNR-levels and compared to a purely supervised model. Our experiments show that self-supervised pretraining not only improves performance in clean conditions, but also yields models which are more robust to adverse conditions compared to purely supervised learning. |
doi_str_mv | 10.1109/ICASSP48485.2024.10447653 |
format | conference_proceeding |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2379-190X |
ispartof | Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (1998), 2024, p.10126-10130 |
issn | 2379-190X |
language | eng |
recordid | cdi_ieee_primary_10447653 |
source | IEEE Xplore All Conference Series |
subjects | Deep Learning; Noise reduction; Predictive coding; Robustness; Self-Supervised Learning; Signal processing; Supervised learning; Target Speaker; Training; Voice activity detection |
title | Self-Supervised Pretraining for Robust Personalized Voice Activity Detection in Adverse Conditions |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T05%3A52%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Self-Supervised%20Pretraining%20for%20Robust%20Personalized%20Voice%20Activity%20Detection%20in%20Adverse%20Conditions&rft.btitle=Proceedings%20of%20the%20...%20IEEE%20International%20Conference%20on%20Acoustics,%20Speech%20and%20Signal%20Processing%20(1998)&rft.au=Bovbjerg,%20Holger%20Severin&rft.date=2024-04-14&rft.spage=10126&rft.epage=10130&rft.pages=10126-10130&rft.eissn=2379-190X&rft_id=info:doi/10.1109/ICASSP48485.2024.10447653&rft.eisbn=9798350344851&rft_dat=%3Cieee_CHZPO%3E10447653%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i723-32cbe6f88aa44dd3a4d6e0714d8dca8287b53857433623436a7cb9baf206eae13%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10447653&rfr_iscdi=true |