A study of transformer-based end-to-end speech recognition system for Kazakh language

Bibliographic Details
Published in: Scientific Reports, 2022-05, Vol. 12(1), p. 8337, Article 8337
Main Authors: Orken Mamyrbayev, Dina Oralbekova, Keylan Alimhan, Tolganay Turdalykyzy, Mohamed Othman
Format: Article
Language: English

Summary: The Transformer model, which permits parallelization and is built on self-attention, is now widely used in speech recognition. The great advantages of this architecture are its fast training speed and the absence of the sequential computation inherent to recurrent neural networks. In this work, Transformer models and an end-to-end model based on connectionist temporal classification (CTC) were combined to build a system for automatic recognition of Kazakh speech. Kazakh belongs to the agglutinative languages and has limited data for implementing speech recognition systems, and some studies have shown that the Transformer model improves system performance for such low-resource languages. Our experiments revealed that the joint use of the Transformer and CTC models improved the performance of the Kazakh speech recognition system; with an integrated language model, it achieved its best character error rate, 3.7%, on a clean dataset.
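
The record describes the joint Transformer/CTC approach only at a high level. As an illustration of the general technique, not the authors' actual implementation, a minimal PyTorch sketch of a Transformer encoder with a CTC output head might look as follows. The class name, vocabulary size, and all hyperparameters are assumptions; the paper's full system additionally integrates an attention decoder and an external language model, which are omitted here.

    import torch
    import torch.nn as nn

    class TransformerCTC(nn.Module):
        # Hypothetical Transformer-encoder acoustic model with a CTC head;
        # hyperparameters are illustrative, not taken from the paper.
        def __init__(self, n_feats=80, d_model=256, n_heads=4,
                     n_layers=6, vocab_size=64):
            super().__init__()
            self.proj = nn.Linear(n_feats, d_model)        # project filterbank frames
            layer = nn.TransformerEncoderLayer(
                d_model, n_heads, dim_feedforward=1024, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.ctc_head = nn.Linear(d_model, vocab_size) # index 0 = CTC blank

        def forward(self, feats):                          # feats: (batch, time, n_feats)
            h = self.encoder(self.proj(feats))
            return self.ctc_head(h).log_softmax(dim=-1)    # (batch, time, vocab)

    model = TransformerCTC()
    feats = torch.randn(2, 100, 80)            # 2 utterances, 100 frames each
    log_probs = model(feats).transpose(0, 1)   # CTCLoss expects (time, batch, vocab)
    targets = torch.randint(1, 64, (2, 20))    # dummy character labels (0 is blank)
    loss = nn.CTCLoss(blank=0)(
        log_probs, targets,
        input_lengths=torch.full((2,), 100, dtype=torch.long),
        target_lengths=torch.full((2,), 20, dtype=torch.long))

In a joint system of the kind the abstract describes, decoding would combine the CTC scores with the attention decoder and the integrated language model, typically via beam search with score interpolation.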
ISSN: 2045-2322
DOI: 10.1038/s41598-022-12260-y