Analysis of sequence to sequence neural networks on grapheme to phoneme conversion task
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Summary: In this paper, we analyze the performance of various sequence-to-sequence neural networks on the task of grapheme-to-phoneme (G2P) conversion. G2P is an important component in applications such as text-to-speech synthesis and automatic speech recognition. Because the number of graphemes in a word generally differs from the number of phonemes, the two sequences are traditionally first aligned and then mapped. With the recent advent of sequence-to-sequence neural networks, the alignment step can be skipped, allowing the input sequence to be mapped directly to the output sequence. Although sequence-to-sequence networks have only recently been applied to this task, several questions concerning the architecture remain to be addressed. We show in this paper that complex recurrent units (such as long short-term memory cells) may not be required to achieve good performance on this task; simple recurrent neural networks (RNNs) suffice. We also show that the encoder can be a uni-directional RNN, as opposed to the usually preferred bi-directional RNN. Further, our experiments reveal that encoder-decoder models with soft alignment outperform their fixed-context-vector counterparts. The results demonstrate that, with very few parameters, we can achieve performance comparable to that of much more complicated architectures.
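To make the architecture described in the summary concrete, below is a minimal PyTorch sketch (not the authors' implementation; all layer sizes, vocabulary sizes, and names are assumptions) of a G2P model with a uni-directional plain-RNN encoder and a decoder that soft-aligns over the encoder states instead of compressing the input into a single fixed context vector.

```python
# Minimal sketch, not the authors' implementation: a sequence-to-sequence
# G2P model with a uni-directional plain-RNN (tanh) encoder and a decoder
# that soft-aligns over encoder states. All sizes and names are assumptions.
import torch
import torch.nn as nn

class G2PSeq2Seq(nn.Module):
    def __init__(self, n_graphemes, n_phonemes, emb=64, hidden=128):
        super().__init__()
        self.g_emb = nn.Embedding(n_graphemes, emb)
        self.p_emb = nn.Embedding(n_phonemes, emb)
        # Simple RNN units instead of LSTM cells; encoder is uni-directional.
        self.encoder = nn.RNN(emb, hidden, batch_first=True)
        self.decoder = nn.RNN(emb + hidden, hidden, batch_first=True)
        self.score = nn.Linear(2 * hidden, 1)  # soft-alignment scorer
        self.out = nn.Linear(hidden, n_phonemes)

    def forward(self, graphemes, phonemes):
        enc, h = self.encoder(self.g_emb(graphemes))       # enc: (B, Tg, H)
        dec_in = self.p_emb(phonemes)                      # (B, Tp, E)
        logits = []
        for t in range(dec_in.size(1)):
            # Score every encoder state against the current decoder state,
            # then build a per-step context as the attention-weighted sum.
            q = h[-1].unsqueeze(1).expand(-1, enc.size(1), -1)
            w = torch.softmax(self.score(torch.cat([enc, q], -1)).squeeze(-1), 1)
            ctx = (w.unsqueeze(-1) * enc).sum(1)           # (B, H)
            step = torch.cat([dec_in[:, t], ctx], -1).unsqueeze(1)
            _, h = self.decoder(step, h)
            logits.append(self.out(h[-1]))
        return torch.stack(logits, 1)                      # (B, Tp, n_phonemes)

# Toy usage: a batch of two 7-grapheme words, teacher-forced for 6 phonemes.
model = G2PSeq2Seq(n_graphemes=30, n_phonemes=45)
g = torch.randint(0, 30, (2, 7))
p = torch.randint(0, 45, (2, 6))
print(model(g, p).shape)  # torch.Size([2, 6, 45])
```

Each decoding step re-weights the encoder states before emitting a phoneme; this per-step soft alignment is the mechanism the summary reports outperforming a fixed context vector.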
ISSN: 2161-4407
DOI: 10.1109/IJCNN.2016.7727552