
What is lost in Normalization? Exploring Pitfalls in Multilingual ASR Model Evaluations

This paper explores the pitfalls in evaluating multilingual automatic speech recognition (ASR) models, with a particular focus on Indic language scripts. We investigate the text normalization routines employed by leading ASR models, including OpenAI Whisper, Meta's MMS, Seamless, and Assembly AI's Conformer, and their unintended consequences on performance metrics. Our research reveals that current text normalization practices, while aiming to standardize ASR outputs for fair comparison by removing inconsistencies such as variations in spelling, punctuation, and special characters, are fundamentally flawed when applied to Indic scripts. Through empirical analysis using text similarity scores and in-depth linguistic examination, we demonstrate that these flaws lead to artificially improved performance metrics for Indic languages. We conclude by proposing a shift towards developing text normalization routines that leverage native linguistic expertise, ensuring more robust and accurate evaluations of multilingual ASR models.
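
One concrete way the failure described above can arise is when a normalizer treats Indic combining marks (dependent vowel signs, the virama) as removable diacritics or special characters. The sketch below is a minimal standard-library illustration, not the actual routine used by Whisper, MMS, Seamless, or Conformer; the helper names strip_marks and similarity, the choice of difflib.SequenceMatcher as the text similarity score, and the Malayalam word pair are assumptions made for this example.

import unicodedata
from difflib import SequenceMatcher


def strip_marks(text: str) -> str:
    """Delete every character in a Unicode mark category (Mn, Mc, Me).

    In Indic scripts this removes dependent vowel signs and the virama,
    not just optional diacritics, which is the failure mode the abstract
    describes for aggressive multilingual normalization.
    """
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(
        ch for ch in decomposed if not unicodedata.category(ch).startswith("M")
    )


def similarity(a: str, b: str) -> float:
    """A simple text similarity score in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()


# Hypothetical Malayalam reference/hypothesis pair (for illustration only):
# the words differ in a single dependent vowel sign, so they are genuinely
# different words that an evaluation metric should keep apart.
reference = "കടി"    # kati, "bite"
hypothesis = "കട"    # kata, "shop": a real recognition error

print(similarity(reference, hypothesis))    # 0.8, the error is penalized
print(similarity(strip_marks(reference),
                 strip_marks(hypothesis)))  # 1.0, the error disappears

Because the two distinct words collapse to the same string once their vowel signs are removed, the post-normalization similarity jumps to 1.0, which is exactly the kind of artificially improved metric the abstract warns about for Indic languages.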


Bibliographic Details
Published in: arXiv.org, 2024-11
Main Authors: Manohar, Kavya; Pillai, Leena G; Sherly, Elizabeth
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Automatic speech recognition; Empirical analysis; Linguistics; Performance evaluation; Performance measurement; Scripts
Online Access: https://www.proquest.com/docview/3100998507