Unsupervised, smooth training of feed-forward neural networks for mismatch compensation
Format: Conference Proceeding
Language: English
Summary: We present a maximum likelihood technique for training feedforward neural networks. The proposed technique is completely unsupervised, eliminating the need for target values for each input; stereo databases are therefore no longer required for learning nonlinear distortions under adverse conditions in speech recognition applications. We show that this technique is guaranteed to converge smoothly to a local maximum, and that it provides a more meaningful metric for speech recognition than the traditional mean square error. We apply the technique to model compensation to reduce the mismatch between training and testing conditions in speech recognition, and show that this data-driven technique can be used under a wide variety of conditions without prior knowledge of the mismatch.
DOI: 10.1109/ASRU.1997.659127