MuST-C: A multilingual corpus for end-to-end speech translation

Bibliographic Details
Published in: Computer Speech & Language, 2021-03, Vol. 66, p. 101155, Article 101155
Main Authors: Cattoni, Roldano; Di Gangi, Mattia Antonino; Bentivogli, Luisa; Negri, Matteo; Turchi, Marco
Format: Article
Language: English
Description
Summary:
•Problem: end-to-end speech translation requires large corpora to train neural models.
•Contribution: MuST-C is a large multilingual corpus built from English TED Talks.
•Corpus content: English speech, aligned transcriptions/translations in 14 languages.
•Other key features: high topic and speaker variety, large size, free distribution.
•Discussion: empirical/manual quality evaluation, baseline results on all languages.
End-to-end spoken language translation (SLT) has recently gained popularity thanks to advances in sequence-to-sequence learning in its two parent tasks: automatic speech recognition (ASR) and machine translation (MT). However, research in the field must confront the scarcity of publicly available corpora for training data-hungry neural networks. Indeed, while traditional cascade solutions can build on sizable ASR and MT training data for a variety of languages, the available SLT corpora suitable for end-to-end training are few, typically small, and of limited language coverage. We help fill this gap by presenting MuST-C, a large and freely available Multilingual Speech Translation Corpus built from English TED Talks. Its unique features include: i) language coverage and diversity (from English into 14 languages from different families), ii) size (at least 237 hours of transcribed recordings per language, 430 on average), iii) variety of topics and speakers, and iv) data quality. Besides describing the corpus creation methodology and discussing the outcomes of empirical and manual quality evaluations, we present baseline results computed with strong systems on each language direction covered by MuST-C.
ISSN: 0885-2308
EISSN: 1095-8363
DOI: 10.1016/j.csl.2020.101155