Continuous Rating as Reliable Human Evaluation of Simultaneous Speech Translation
Simultaneous speech translation (SST) can be evaluated on simulated online events where human evaluators watch subtitled videos and continuously express their satisfaction by pressing buttons (so-called Continuous Rating). Continuous Rating is easy to collect, but little is known about its reliability or its relation to SST users' comprehension of a foreign-language document. In this paper, we contrast Continuous Rating with factual questionnaires administered to judges with different levels of source-language knowledge. Our results show that Continuous Rating is an easy and reliable SST quality assessment if the judges have at least limited knowledge of the source language. Our study indicates users' preferences on subtitle layout and presentation style and, most importantly, provides significant evidence that users with advanced source-language knowledge prefer low latency over fewer re-translations.
Published in: | arXiv.org, 2024-11 |
---|---|
Main Authors: | Javorský, Dávid; Macháček, Dominik; Bojar, Ondřej |
Format: | Article |
Language: | English |
Subjects: | Flicker; Machine translation; Subtitles & subtitling; Translating |
container_title | arXiv.org |
creator | Javorský, Dávid; Macháček, Dominik; Bojar, Ondřej |
description | Simultaneous speech translation (SST) can be evaluated on simulated online events where human evaluators watch subtitled videos and continuously express their satisfaction by pressing buttons (so-called Continuous Rating). Continuous Rating is easy to collect, but little is known about its reliability or its relation to SST users' comprehension of a foreign-language document. In this paper, we contrast Continuous Rating with factual questionnaires administered to judges with different levels of source-language knowledge. Our results show that Continuous Rating is an easy and reliable SST quality assessment if the judges have at least limited knowledge of the source language. Our study indicates users' preferences on subtitle layout and presentation style and, most importantly, provides significant evidence that users with advanced source-language knowledge prefer low latency over fewer re-translations. |
format | article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-11 |
issn | 2331-8422 |
language | eng |
source | ProQuest Publicly Available Content database |
subjects | Flicker; Machine translation; Subtitles & subtitling; Translating |
title | Continuous Rating as Reliable Human Evaluation of Simultaneous Speech Translation |