
N-gram Boosting: Improving Contextual Biasing with Normalized N-gram Targets

Bibliographic Details
Published in: arXiv.org, 2023-08
Main Authors: Wang Yau Li, Shreekantha Nadig, Karol Chang, Zafarullah Mahmood, Riqiang Wang, Simon Vandieken, Jonas Robertson, Fred Mailhot
Format: Article
Language: English
Description
Summary: Accurate transcription of proper names and technical terms is particularly important in speech-to-text applications for business conversations. These words, which are essential to understanding the conversation, are often rare and therefore likely to be under-represented in text and audio training data, creating a significant challenge in this domain. We present a two-step keyword boosting mechanism that works on normalized unigrams and n-grams rather than just single tokens, eliminating the missed hits that arise when boosting raw targets. In addition, we show how adjusting the boosting-weight logic avoids over-boosting multi-token keywords. This improves our keyword recognition rate by 26% relative on our proprietary in-domain dataset and by 2% on LibriSpeech. The method is particularly useful for targets that involve non-alphabetic characters or have non-standard pronunciations.
ISSN: 2331-8422
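
The summary describes two ideas: keyword targets are normalized and matched as unigrams and n-grams rather than as raw token strings, and the boost weight is spread across a multi-token keyword's tokens so that long keywords are not over-boosted relative to unigrams. The sketch below is a minimal Python illustration of those two ideas under stated assumptions; the class name NgramBooster, the normalization rule, the prefix-matching step, and the total_boost parameter are hypothetical and not the paper's implementation.

```python
# Minimal illustrative sketch of n-gram keyword boosting with normalized
# targets and per-token weight splitting. All names and the prefix-matching
# rule are assumptions for illustration, not the paper's implementation.
import re
from typing import Dict, List, Tuple


def normalize(text: str) -> str:
    """Lowercase and drop non-alphanumeric characters so targets containing
    punctuation or digits still match plainly spelled hypothesis text."""
    return re.sub(r"[^a-z0-9 ]+", "", text.lower()).strip()


class NgramBooster:
    """Boosts hypotheses that match normalized keyword n-grams.

    total_boost is the score added for one complete keyword match; dividing
    it by the keyword's token count keeps multi-token keywords from being
    over-boosted relative to unigram keywords.
    """

    def __init__(self, keywords: List[str], total_boost: float = 4.0):
        self.per_token: Dict[Tuple[str, ...], float] = {}
        for kw in keywords:
            tokens = tuple(normalize(kw).split())
            if tokens:
                self.per_token[tokens] = total_boost / len(tokens)

    def step_boost(self, hyp_tokens: List[str], next_token: str) -> float:
        """Boost to add when extending hyp_tokens with next_token: the
        per-token weight is granted whenever the extended hypothesis ends
        with a prefix of a keyword, so credit accrues token by token."""
        extended = [normalize(t) for t in hyp_tokens] + [normalize(next_token)]
        boost = 0.0
        for kw_tokens, per_tok in self.per_token.items():
            for plen in range(1, len(kw_tokens) + 1):
                if len(extended) >= plen and tuple(extended[-plen:]) == kw_tokens[:plen]:
                    boost = max(boost, per_tok)
        return boost


booster = NgramBooster(["New York Times", "Kubernetes"], total_boost=4.5)
print(booster.step_boost(["read", "the", "new"], "york"))  # partial n-gram match -> 1.5
print(booster.step_boost(["deploy", "on"], "kubernetes"))  # unigram keyword match -> 4.5
```

A full decoder would also retract partial credit when a hypothesis stops extending a keyword; that bookkeeping is omitted here to keep the sketch short.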