
S-DCCRN: Super Wide Band DCCRN with Learnable Complex Feature for Speech Enhancement

Bibliographic Details
Main Authors: Lv, Shubo, Fu, Yihui, Xing, Mengtao, Sun, Jiayao, Xie, Lei, Huang, Jun, Wang, Yannan, Yu, Tao
Format: Conference Proceeding
Language: English
Pages: 7767-7771
Description: In speech enhancement, complex neural networks have shown promising performance due to their effectiveness in processing the complex-valued spectrum. Most recent speech enhancement approaches focus on wide-band signals with a sampling rate of 16 kHz. However, research on super-wide-band (e.g., 32 kHz) or even full-band (48 kHz) denoising with deep learning is still in its infancy due to the difficulty of modeling more frequency bands, particularly the high-frequency components. In this paper, we substantially extend our previous deep complex convolution recurrent neural network (DCCRN) to a super-wide-band version, S-DCCRN, which performs speech denoising at a 32 kHz sampling rate. We first employ a cascaded sub-band and full-band processing module consisting of two small-footprint DCCRNs: one operates on the sub-band signal and one on the full-band signal, aiming to benefit from both local and global frequency information. Moreover, instead of simply adopting the STFT feature as input, we use a complex feature encoder trained in an end-to-end manner to refine the information of different frequency bands. We also use a complex feature decoder to revert the feature to the time-frequency domain. Finally, a learnable spectrum compression method is adopted to adjust the energy of different frequency bands, which is beneficial for neural network learning. The proposed model, S-DCCRN, surpasses PercepNet as well as several other competitive models and achieves state-of-the-art performance in terms of speech quality and intelligibility. Ablation studies further demonstrate the effectiveness of each contribution.
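DCCRN-family models enhance speech by estimating a complex-valued mask that is applied to the noisy complex spectrum via complex multiplication, so that both magnitude and phase are corrected jointly. A minimal per-bin sketch of that core operation, assuming the spectrum and mask are given as separate real and imaginary lists (the function name and list-based layout are mine, not from the paper):

```python
def apply_complex_mask(spec_r, spec_i, mask_r, mask_i):
    """Apply a complex ratio mask to a noisy spectrum frame, bin by bin,
    using complex multiplication: (a + bi)(c + di) = (ac - bd) + (ad + bc)i."""
    out_r = [a * c - b * d for a, b, c, d in zip(spec_r, spec_i, mask_r, mask_i)]
    out_i = [a * d + b * c for a, b, c, d in zip(spec_r, spec_i, mask_r, mask_i)]
    return out_r, out_i
```

Because the mask is complex rather than magnitude-only, the network can rotate the phase of each time-frequency bin as well as scale its energy.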
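The learnable spectrum compression step can be pictured as a per-band power law on the magnitude, where the exponent is trained rather than fixed, so that weak high-frequency bands are boosted relative to strong low-frequency ones. A rough sketch with fixed exponents standing in for the learnable parameters (names and layout are my own illustration, not the paper's implementation):

```python
import cmath

def compress_spectrum(frame, beta):
    """Power-law compression per frequency bin: keep the phase unchanged,
    raise the magnitude to beta (0 < beta <= 1 flattens the energy spread
    across bands, which eases neural network learning)."""
    return [cmath.rect(abs(x) ** b, cmath.phase(x)) for x, b in zip(frame, beta)]
```

In the learnable variant, each band's exponent would be a trainable parameter updated by backpropagation instead of the constants passed in here.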
DOI: 10.1109/ICASSP43922.2022.9747029
Publisher: IEEE
Conference date: 2022-05-23
EISBN: 1665405406; 9781665405409
EISSN: 2379-190X
Published in: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, p. 7767-7771
Source: IEEE Xplore All Conference Series
Subjects: Conferences; Convolution; Deep learning; Information processing; Noise reduction; Recurrent neural networks; S-DCCRN; speech enhancement; super wide band; Time-frequency analysis