Iterative normalization for speaker-adaptive training in continuous speech recognition
| Main Authors: | |
|---|---|
| Format: | Conference Proceeding |
| Language: | English |
| Subjects: | |
Summary: The authors present several techniques to improve an algorithm presented last year for speaker-adaptive training in continuous speech recognition. The previous method uses a transformation matrix to modify the hidden Markov model (HMM) parameters of a prechosen prototype speaker so that they model a target speaker. To estimate the transformation matrix, it aligns a set of target speech with the same set of speech uttered by the prototype speaker using dynamic time warping. The authors focus on improving the previous method in two respects: the modeling of the spectral differences between the two speakers, and the accuracy of the alignment. To improve the modeling of the spectral differences, they implemented a phoneme-dependent mapping procedure that transforms the prototype HMMs into the estimated target HMMs using a set of phoneme-dependent matrices. To improve the alignment, they developed a silence model, a linear duration normalization, and an iterative normalization procedure. They tested the new methods on the standard DARPA database with a grammar of perplexity 60, obtaining a 30% reduction in word errors compared to the previous algorithm.
ISSN: 1520-6149, 2379-190X

DOI: 10.1109/ICASSP.1989.266501
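
The abstract above describes the adaptation step only at a high level: phoneme-dependent matrices, estimated from DTW-aligned prototype/target speech, map the prototype speaker's HMM parameters toward the target speaker. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function names, the affine least-squares estimator, and the dictionary-based data layout are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): phoneme-dependent spectral mapping.
# For each phoneme p, an affine transform W_p is fit by least squares to
# DTW-aligned (prototype frame, target frame) pairs, then applied to the
# prototype HMM mean vectors to approximate the target speaker's models.

import numpy as np


def estimate_phoneme_transforms(aligned_pairs):
    """aligned_pairs: dict phoneme -> list of (proto_frame, target_frame) pairs,
    each frame a 1-D cepstral vector taken from a DTW alignment."""
    transforms = {}
    for phoneme, pairs in aligned_pairs.items():
        X = np.array([p for p, _ in pairs])        # prototype frames, shape (N, D)
        Y = np.array([t for _, t in pairs])        # target frames,    shape (N, D)
        X1 = np.hstack([X, np.ones((len(X), 1))])  # append bias column (affine map)
        # Least-squares solution of X1 @ W ~= Y, one transform per phoneme
        W, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        transforms[phoneme] = W                    # shape (D + 1, D)
    return transforms


def adapt_hmm_means(proto_means, transforms):
    """proto_means: dict phoneme -> array of HMM state means, shape (S, D).
    Returns estimated target-speaker means via the phoneme-dependent maps."""
    adapted = {}
    for phoneme, means in proto_means.items():
        W = transforms[phoneme]
        M1 = np.hstack([means, np.ones((len(means), 1))])
        adapted[phoneme] = M1 @ W
    return adapted
```

Under the same assumptions, the iterative normalization mentioned in the abstract would repeat this cycle: re-align the target utterances against the adapted models, re-estimate the phoneme-dependent transforms, and stop when the alignment no longer changes.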