A supervised learning method for tempo estimation of musical audio
Automatic tempo estimation for musical audio with low pulse clarity is challenging. To increase the pulse clarity of the input audio signal, the proposed method applies source filtering, in particular low-pass filtering, to the raw audio, producing multiple audio clips for processing...
Saved in:
Main Authors: | Wu, Fu-Hai Frank; Jang, Jyh-Shing Roger |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Tempo Estimation; Tempogram; Pulse Clarity; Long-Term Periodicity (LTP); Tempo-Pair Model |
Online Access: | Request full text |
cited_by | |
---|---|
cites | |
container_end_page | 604 |
container_issue | |
container_start_page | 599 |
container_title | |
container_volume | |
creator | Wu, Fu-Hai Frank; Jang, Jyh-Shing Roger |
description | Automatic tempo estimation for musical audio with low pulse clarity is challenging. To increase the pulse clarity of the input audio signal, the proposed method applies source filtering, in particular low-pass filtering, to the raw audio, producing multiple audio clips for processing. Each clip is analyzed with a tempogram derived from an onset detection function to obtain a tempo pair (the output of the tempo-pair estimator) and the pair's relative strength via the long-term periodicity (LTP) function. Finally, a classifier-based selector chooses the best estimate among the different audio paths. First place in the at-least-one-tempo-correct index and second place in the P-score index of the MIREX 2013 audio tempo estimation evaluation demonstrate the effectiveness of the proposed method for audio tempo estimation. |
doi_str_mv | 10.1109/MED.2014.6961438 |
format | conference_proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISBN: 9781479959006; ISBN: 1479959006; EISBN: 9781479959013; EISBN: 1479959014 |
ispartof | 22nd Mediterranean Conference on Control and Automation, 2014, p.599-604 |
issn | |
language | eng |
recordid | cdi_ieee_primary_6961438 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Accuracy; Estimation; Feature extraction; Filtering; Indexes; Long-Term Periodicity (LTP); Mathematical model; Pulse Clarity; Tempo Estimation; Tempo-Pair Model; Tempogram; Training |
title | A supervised learning method for tempo estimation of musical audio |
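The pipeline described in the abstract (onset detection function → tempogram/periodicity analysis → tempo pick) can be sketched in a simplified form. This is not the authors' implementation: it replaces their LTP function, tempo-pair estimator, and classifier-based selector with a plain autocorrelation peak pick, and all function names and parameters below are illustrative assumptions.

```python
import numpy as np

def onset_strength(signal, hop=512):
    """A simple onset detection function: half-wave rectified
    frame-to-frame energy difference over non-overlapping frames."""
    n = len(signal) // hop
    energy = (signal[:n * hop].reshape(n, hop) ** 2).sum(axis=1)
    return np.maximum(np.diff(energy), 0.0)

def estimate_tempo(signal, sr, hop=512, bpm_range=(40, 200)):
    """Estimate tempo (BPM) from the autocorrelation of the onset
    curve, picking the strongest lag inside the BPM search range."""
    odf = onset_strength(signal, hop=hop)
    odf = odf - odf.mean()
    # Autocorrelation at non-negative lags: an elementary tempogram slice.
    ac = np.correlate(odf, odf, mode="full")[len(odf) - 1:]
    fps = sr / hop                               # onset-curve frames per second
    lo = int(round(fps * 60.0 / bpm_range[1]))   # lag of the fastest tempo
    hi = int(round(fps * 60.0 / bpm_range[0]))   # lag of the slowest tempo
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * fps / lag
```

On a synthetic click track with one click every 0.5 s (120 BPM), this sketch should recover a tempo close to 120 BPM; the paper's contribution is precisely the parts elided here, which disambiguate such peaks (and their half/double-tempo twins) on material with low pulse clarity.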