Efficient Monotonic Multihead Attention

We introduce the Efficient Monotonic Multihead Attention (EMMA), a state-of-the-art simultaneous translation model with numerically stable and unbiased monotonic alignment estimation. In addition, we present improved training and inference strategies, including simultaneous fine-tuning from an offline translation model and reduction of monotonic alignment variance. The experimental results demonstrate that the proposed model attains state-of-the-art performance in simultaneous speech-to-text translation on the Spanish and English translation task.
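
The quantity at the heart of the abstract is the expected monotonic alignment. As a rough illustration only (not EMMA's actual estimator), the classic recurrence from hard monotonic attention computes, from stepwise stopping probabilities p[i, j], the probability alpha[i, j] that target token i attends source position j. The sketch below is a naive PyTorch implementation of that recurrence; all names are illustrative, and the paper's contribution is precisely a numerically stable, unbiased alternative to this kind of direct computation.

```python
import torch

def expected_alignment(p: torch.Tensor) -> torch.Tensor:
    """Naive expected monotonic alignment (illustrative, not EMMA itself).

    p[i, j] is the probability of stopping at source position j while
    producing target token i. Returns alpha with
      alpha[i, j] = p[i, j] * sum_{k<=j} alpha[i-1, k] * prod_{l=k}^{j-1} (1 - p[i, l]).
    """
    tgt_len, src_len = p.shape
    one_minus = 1.0 - p
    alpha = torch.zeros_like(p)

    # Base case: the first target token stops at j only if every earlier
    # source position was skipped: p[0, j] * prod_{l<j} (1 - p[0, l]).
    shifted = torch.cat([torch.ones(1, dtype=p.dtype), one_minus[0, :-1]])
    alpha[0] = p[0] * torch.cumprod(shifted, dim=0)

    # Recurrence, computed left to right with the running sum
    #   q[j] = alpha[i-1, j] + q[j-1] * (1 - p[i, j-1]).
    for i in range(1, tgt_len):
        q = torch.empty(src_len, dtype=p.dtype)
        q[0] = alpha[i - 1, 0]
        for j in range(1, src_len):
            q[j] = alpha[i - 1, j] + q[j - 1] * one_minus[i, j - 1]
        alpha[i] = p[i] * q
    return alpha
```

For example, with p = torch.sigmoid(torch.randn(4, 6)), each row of expected_alignment(p) sums to at most 1, the deficit being the probability of never stopping within the source; the direct computation above is exactly the kind of estimation the paper reports stabilizing and debiasing.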

Bibliographic Details
Published in: arXiv.org, 2023-12
Publisher: Cornell University Library, arXiv.org (Ithaca)
Main Authors: Ma, Xutai; Sun, Anna; Ouyang, Siqi; Inaguma, Hirofumi; Tomasello, Paden
Format: Article
Language: English
Subjects: Alignment
EISSN: 2331-8422