Effectiveness of the backoff hierarchical class n-gram language models to model unseen events in speech recognition
Format: Conference Proceeding
Language: English
Summary: Backoff hierarchical class n-gram language models use a class hierarchy to define an appropriate context. Each node in the hierarchy is a class containing all the words of its descendant nodes (classes). The closer a node is to the root, the more general the corresponding class, and consequently the context, is. We demonstrate experimentally the effectiveness of the backoff hierarchical class n-gram language modeling approach for modeling unseen events in speech recognition: it improves over regular backoff n-gram models. We also study the performance of this approach on vocabularies of different sizes and investigate the impact of the hierarchy depth on the performance of the model. Performance is reported on several databases, namely Switchboard, CallHome, and the Wall Street Journal (WSJ) corpus. Experiments on the Switchboard and CallHome databases, which contain few unseen events in the test set, show up to 6% improvement in unseen-event perplexity with a vocabulary of 16,800 words. With a relatively large number of unseen events in the WSJ test corpus and using two vocabulary sets of 5,000 and 20,000 words, we obtain up to 26% improvement in unseen-event perplexity and up to 12% improvement in WER when a backoff hierarchical class trigram language model is used on an ASR test set. Results confirm that the improvement grows as the number of unseen events increases.
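The backoff scheme the summary describes can be sketched in a few lines: when a word-level n-gram is unseen, the context word is replaced by successively more general ancestor classes from the hierarchy until a seen event is found. The toy corpus, the hand-built hierarchy, and the maximum-likelihood estimates below are illustrative assumptions, not the authors' exact formulation (which uses smoothed backoff weights).

```python
# Sketch of backoff through a class hierarchy (illustrative only).
from collections import Counter

# Hypothetical hierarchy: each word maps to a chain of increasingly
# general ancestor classes, ending at ROOT (the class of all words).
HIERARCHY = {
    "monday":  ["DAY", "TIME", "ROOT"],
    "tuesday": ["DAY", "TIME", "ROOT"],
    "january": ["MONTH", "TIME", "ROOT"],
    "runs":    ["VERB", "ROOT"],
    "walks":   ["VERB", "ROOT"],
}

corpus = "he runs monday he walks tuesday she runs monday".split()

# Word-level bigram/unigram counts, plus counts with the context word
# replaced by each of its ancestor classes (class-context bigrams).
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
class_bigrams, class_unigrams = Counter(), Counter()
for w1, w2 in zip(corpus, corpus[1:]):
    for c in HIERARCHY.get(w1, ["ROOT"]):
        class_bigrams[(c, w2)] += 1
        class_unigrams[c] += 1

def p_backoff(w2, w1):
    """P(w2 | w1): use the word bigram if seen; otherwise back off to
    successively more general class contexts, ending at ROOT."""
    if bigrams[(w1, w2)] > 0:
        return bigrams[(w1, w2)] / unigrams[w1]
    for c in HIERARCHY.get(w1, ["ROOT"]):
        if class_bigrams[(c, w2)] > 0:
            return class_bigrams[(c, w2)] / class_unigrams[c]
    return unigrams[w2] / len(corpus)  # last resort: unigram estimate

print(p_backoff("monday", "runs"))   # seen bigram: 2/2 = 1.0
print(p_backoff("tuesday", "runs"))  # unseen bigram, backs off to VERB class: 1/3
```

A deeper hierarchy gives more intermediate classes to back off through before reaching the uninformative root, which is the depth effect the summary says the paper investigates.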
DOI: 10.1109/ASRU.2003.1318501