
Multi-scale multi-band DenseNets for audio source separation

This paper deals with the problem of audio source separation. To handle the complex and ill-posed nature of this problem, current state-of-the-art approaches employ deep neural networks to obtain instrumental spectra from a mixture. In this study, we propose a novel network architecture that extends the recently developed densely connected convolutional network (DenseNet), which has shown excellent results on image classification tasks. To deal with the specific problem of audio source separation, an up-sampling layer, block skip connections and band-dedicated dense blocks are incorporated on top of DenseNet. The proposed approach takes advantage of long contextual information and outperforms the state of the art on the SiSEC 2016 competition by a large margin in terms of signal-to-distortion ratio. Moreover, the proposed architecture requires significantly fewer parameters and considerably less training time than other methods.
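The two ideas the abstract leans on — DenseNet's concatenative connectivity and band-dedicated dense blocks — can be illustrated with a minimal, dependency-free sketch. This is not the authors' MMDenseNet: the layer count, growth rate, and the random 1x1 projections standing in for learned convolutions are illustrative assumptions; only the dense connectivity pattern and the per-band processing follow the abstract.

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, rng=None):
    # DenseNet-style connectivity: each layer consumes the channel-wise
    # concatenation of the input and all previous layers' outputs.
    # Random 1x1 projections + ReLU stand in for the real convolutions.
    if rng is None:
        rng = np.random.default_rng(0)
    features = [x]                                    # x: (C, F, T)
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)        # (C_total, F, T)
        w = rng.standard_normal((growth, inp.shape[0]))
        new = np.maximum(np.tensordot(w, inp, axes=1), 0.0)  # (growth, F, T)
        features.append(new)
    return np.concatenate(features, axis=0)           # (C + num_layers*growth, F, T)

def multi_band_dense(spec, split_bin):
    # Band-dedicated dense blocks: low and high frequency bands are
    # processed by separate blocks, then re-stacked along frequency.
    low, high = spec[:, :split_bin, :], spec[:, split_bin:, :]
    return np.concatenate([dense_block(low), dense_block(high)], axis=1)

rng = np.random.default_rng(1)
spec = rng.standard_normal((1, 64, 32))   # (channels, freq bins, frames)
out = multi_band_dense(spec, split_bin=32)
print(out.shape)                           # (13, 64, 32): 1 + 3*4 channels
```

Splitting the spectrogram lets each band's block specialize (e.g. harmonic structure in low bands, noise-like texture in high bands), which is the motivation the paper gives for band-dedicated blocks.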

Bibliographic Details
Main Authors: Takahashi, Naoya; Mitsufuji, Yuki
Format: Conference Proceeding
Language: English
DOI: 10.1109/WASPAA.2017.8169987
EISSN: 1947-1629; EISBN: 9781538616321, 1538616327
Published in: 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2017, pp. 21-25
Source: IEEE Xplore All Conference Series
Subjects: Computer architecture; Convolution; Convolutional neural networks; DenseNet; Kernel; Multi-band; Source separation; Spectrogram; Training