Adaptive Distance-Based Pooling in Convolutional Neural Networks for Audio Event Classification

Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020, Vol. 28, p. 1925-1935
Main Authors: Martín-Morató, Irene, Cobos, Máximo, Ferri, Francesc J.
Format: Article
Language:English
Description
Summary: In recent years, deep convolutional neural networks have become the standard for developing state-of-the-art audio classification systems, taking the lead over traditional approaches based on feature engineering. While they can achieve human-level performance in certain scenarios, their accuracy has been shown to degrade severely when systems are tested on noisy or weakly segmented events. Although better generalization could be obtained by increasing the size of the training dataset, e.g. through data augmentation, this also leads to longer and more complex training procedures. In this article, we propose a new type of pooling layer that compensates for non-relevant information in audio events by applying an adaptive transformation of the convolutional feature maps along the temporal axis. The proposed layer performs a non-linear temporal transformation that follows a uniform distance subsampling criterion in the learned feature space. Experiments conducted on several datasets show significant performance improvements when the proposed layer is added to baseline models, yielding systems that generalize better to mismatched test conditions and learn more robustly from weakly labeled data.
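The core idea of the summary — resampling a convolutional feature map along time at uniform intervals of *feature-space* distance rather than uniform time steps — can be illustrated with a minimal, non-learned sketch. This is only an assumption-laden simplification of the layer described in the abstract (the published layer is adaptive and trained end-to-end); the function name, shapes, and the choice of Euclidean distance and linear interpolation here are illustrative, not taken from the paper.

```python
import numpy as np

def distance_based_pooling(features, out_len):
    """Pool a (T, C) feature-map sequence down to out_len frames by
    sampling at uniform intervals of cumulative feature-space distance.

    Frames where the features change quickly contribute more output
    samples than near-stationary (less relevant) stretches, which is
    the intuition behind the uniform distance subsampling criterion.
    """
    T = features.shape[0]
    # Euclidean distance between consecutive frames in feature space.
    step = np.linalg.norm(np.diff(features, axis=0), axis=1)
    # Cumulative "arc length" along the temporal axis, starting at 0.
    cumdist = np.concatenate([[0.0], np.cumsum(step)])
    total = cumdist[-1]
    if total == 0.0:
        # Constant input: fall back to uniform sampling in time.
        idx = np.linspace(0, T - 1, out_len)
    else:
        # Target positions: uniformly spaced distances along the curve,
        # mapped back to (fractional) frame indices.
        targets = np.linspace(0.0, total, out_len)
        idx = np.interp(targets, cumdist, np.arange(T))
    # Linear interpolation between neighbouring frames.
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (idx - lo)[:, None]
    return (1.0 - w) * features[lo] + w * features[hi]
```

Because sampling is uniform in cumulative distance, the first and last frames are always preserved, and the output length is fixed regardless of the input duration — which is what makes such a layer usable between convolutional feature extraction and a fixed-size classifier head.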
ISSN:2329-9290
2329-9304
DOI:10.1109/TASLP.2020.3001683