
Voice Activity Detection for Transient Noisy Environment Based on Diffusion Nets

Bibliographic Details
Published in: arXiv.org, 2021-06
Main Authors: Ivry, Amir; Berdugo, Baruch; Cohen, Israel
Format: Article
Language: English
Description: We address voice activity detection in acoustic environments with transients and stationary noise, which often occur in real-life scenarios. We exploit the distinct spatial patterns of speech and non-speech audio frames by independently learning the underlying geometric structure of each class. This is done through a deep encoder-decoder neural network architecture: the encoder maps spectral features with temporal information to low-dimensional representations, generated by applying the diffusion maps method, and feeds a decoder that maps the embedded data back into the high-dimensional space. Concatenating the decoder to the encoder yields a deep neural network, resembling the known diffusion nets architecture, that is trained to separate speech from non-speech frames. Experimental results show enhanced performance compared to competing voice activity detection methods, with improvements in accuracy, robustness, and generalization ability. Our model runs in real time and can be integrated into audio-based communication systems. We also present a batch algorithm that achieves even higher accuracy for offline applications.
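The architecture described above can be illustrated with a minimal sketch. The code below is a hypothetical PyTorch example, not the authors' implementation: the feature dimension, layer widths, embedding size, and the linear classification head are assumptions for illustration only, and the paper learns the geometry of speech and non-speech frames independently (with diffusion-maps coordinates as encoder targets), which this single-network sketch does not reproduce.

import torch
import torch.nn as nn

class DiffusionNetVAD(nn.Module):
    # Encoder-decoder in the spirit of diffusion nets, with a per-frame
    # speech/non-speech decision taken from the reconstructed features.
    def __init__(self, n_features=120, embed_dim=3, hidden=256):
        super().__init__()
        # Encoder: spectral features with temporal context -> low-dimensional
        # embedding, intended to approximate the diffusion-maps coordinates.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )
        # Decoder: embedding -> reconstruction in the high-dimensional space.
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )
        # Hypothetical classification head on top of the concatenated network.
        self.classifier = nn.Linear(n_features, 1)

    def forward(self, x):
        z = self.encoder(x)             # low-dimensional embedding per frame
        x_hat = self.decoder(z)         # reconstruction of the input features
        logit = self.classifier(x_hat)  # speech vs. non-speech score
        return z, x_hat, logit

# Toy usage: a batch of 32 frames with 120 spectral features each.
model = DiffusionNetVAD()
frames = torch.randn(32, 120)
z, x_hat, logit = model(frames)
speech_prob = torch.sigmoid(logit)      # per-frame speech probability

In a full training setup one would typically combine a reconstruction loss between the input frames and x_hat, a regression loss pulling z toward precomputed diffusion-maps embeddings, and a binary cross-entropy loss on logit; the exact losses, weighting, and training procedure are not specified by this record.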
DOI: 10.48550/arXiv.2106.13763
Publisher: Cornell University Library, arXiv.org (Ithaca)
Rights: 2021. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the "License").
EISSN: 2331-8422
Source: Publicly Available Content (ProQuest)
Subjects:
Algorithms
Artificial neural networks
Coders
Communications systems
Computer architecture
Diffusion
Encoders-Decoders
Machine learning
Model accuracy
Neural networks
Speech
Voice activity detectors
Voice communication
Voice recognition