
Hidden variability subspace learning for adaptation of deep neural networks

Bibliographic Details
Published in: Electronics Letters, 2018-02, Vol. 54 (3), pp. 173-175
Main Authors: Fernando, S., Sethu, V., Ambikairajah, E.
Format: Article
Language:English
Description
Summary: This Letter proposes a deep neural network (DNN) adaptation method, herein referred to as the hidden variability subspace (HVS) method, to achieve improved robustness under diverse acoustic environments arising from differences in conditions, e.g. speaker, channel, duration and environmental noise. In the proposed approach, a set of condition-dependent parameters is estimated to adapt the hidden-layer weights of the DNN within the HVS, reducing the condition mismatch. These condition-dependent parameters are then connected to the various layers through a new set of adaptively trained weights. The authors evaluate the proposed hidden variability learning method on a language identification task and show that significant performance gains can be obtained by discriminatively estimating a set of adaptation parameters to compensate for the mismatch in the trained model.
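
The record contains only this summary, not the paper's equations. As a rough, hypothetical illustration of the general idea it describes (a small set of condition-dependent coefficients steering hidden-layer weights through an adaptively trained subspace), a minimal PyTorch sketch might look as follows; the class name, dimensions and exact parameterisation are assumptions for illustration, not the authors' formulation.

```python
# Hypothetical sketch only: the record gives no equations, so the names and
# dimensions below are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HVSLinear(nn.Module):
    """A hidden layer whose weight matrix is offset by a low-dimensional,
    condition-dependent point in a learned 'hidden variability subspace'."""
    def __init__(self, in_dim, out_dim, subspace_dim):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)        # condition-independent weights
        # Adaptively trained basis mapping subspace coefficients to a weight offset
        self.basis = nn.Parameter(torch.zeros(subspace_dim, out_dim, in_dim))

    def forward(self, x, cond):
        # cond: (subspace_dim,) coefficients estimated per acoustic condition
        delta_w = torch.einsum('k,koi->oi', cond, self.basis)
        return F.linear(x, self.base.weight + delta_w, self.base.bias)

# Adaptation-time usage: keep the base network fixed and estimate only the small
# condition-dependent vector (and optionally the basis) on condition-specific data.
layer = HVSLinear(in_dim=64, out_dim=128, subspace_dim=4)
cond = torch.zeros(4, requires_grad=True)
x = torch.randn(16, 64)                               # dummy input features
h = torch.tanh(layer(x, cond))                        # adapted hidden activations
```

In this sketch the low dimensionality of `cond` is what keeps adaptation cheap and robust to limited condition data, which is the general motivation the summary gives for subspace-based adaptation.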
ISSN: 0013-5194
EISSN: 1350-911X
DOI: 10.1049/el.2017.4027