
Greedy double sparse dictionary learning for sparse representation of speech signals


Bibliographic Details
Published in: Speech Communication, 2016-12, Vol. 85, p. 71-82
Main Authors: Abrol, V., Sharma, P., Sao, A.K.
Format: Article
Language:English
Description
Summary: This paper proposes a greedy double sparse (DS) dictionary learning algorithm for speech signals, where the dictionary is the product of a predefined base dictionary and a sparse matrix. Exploiting the DS structure, we show that the dictionary can be learned efficiently in the coefficient domain rather than the signal domain. This is achieved by modifying the objective function such that all the matrices involved in the coefficient domain are either sparse or near-sparse, making the dictionary update stage fast. The dictionary is learned on frames extracted from a speech signal using a hierarchical subset selection approach. Here, each dictionary atom is a training speech frame, chosen according to its energy contribution in representing all other training speech frames. In other words, dictionary atoms are encouraged to be close to the training signals that use them in their decomposition. After each atom update, the modified residual serves as the new training data, so the information learned by the previous atoms guides the update of subsequent dictionary atoms. In addition, we show that for a suitable choice of the base dictionary, the storage efficiency of the DS dictionary can be further improved. Finally, the efficiency of the proposed method is demonstrated on the problems of speech representation and speech denoising.
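The DS structure summarized above can be sketched as follows. This is a minimal illustration of the model, not the paper's algorithm: the effective dictionary D is the product of a predefined base dictionary Phi and a column-sparse matrix A, so only the few nonzeros of A (plus their indices) need to be stored rather than the dense D. The orthonormal DCT base, the dimensions, and the per-atom sparsity level are assumptions chosen for the example.

```python
import numpy as np

n, m, s = 64, 128, 4          # frame length, number of atoms, nonzeros per atom
rng = np.random.default_rng(0)

# Assumed base dictionary: an orthonormal DCT-II matrix (one common choice;
# the paper discusses how the choice of base affects storage efficiency).
k = np.arange(n)[:, None]
j = np.arange(n)[None, :]
Phi = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
Phi[0, :] /= np.sqrt(2.0)     # scale the DC row so that Phi @ Phi.T = I

# Column-sparse representation matrix A: each atom uses only s base atoms.
A = np.zeros((n, m))
for col in range(m):
    idx = rng.choice(n, size=s, replace=False)
    A[idx, col] = rng.standard_normal(s)
    A[:, col] /= np.linalg.norm(Phi @ A[:, col])  # unit-norm effective atom

D = Phi @ A   # effective dictionary (n x m); never needs to be stored densely

# Storage: s*m coefficients (with indices) instead of n*m dense entries.
print(D.shape, np.count_nonzero(A))
```

Because Phi is orthonormal, normalizing A's columns also normalizes the effective atoms, and any multiplication by D can be carried out as a fast transform followed by a sparse product, which is what makes working in the coefficient domain cheap.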
ISSN: 0167-6393, 1872-7182
DOI: 10.1016/j.specom.2016.09.004