Evaluation of Automatic Video Captioning Using Direct Assessment
We present Direct Assessment, a method for manually assessing the quality of automatically-generated captions for video. Evaluating the accuracy of video captions is particularly difficult because for any given video clip there is no definitive ground truth or correct answer against which to measure. Automatic metrics for comparing automatic video captions against a manual caption such as BLEU and METEOR, drawn from techniques used in evaluating machine translation, were used in the TRECVid video captioning task in 2016 but these are shown to have weaknesses. The work presented here brings human assessment into the evaluation by crowdsourcing how well a caption describes a video. We automatically degrade the quality of some sample captions which are assessed manually and from this we are able to rate the quality of the human assessors, a factor we take into account in the evaluation. Using data from the TRECVid video-to-text task in 2016, we show how our direct assessment method is replicable and robust and should scale to settings where there are many caption-generation techniques to be evaluated.
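The abstract describes two mechanisms: screening crowd assessors by checking that they rate deliberately degraded captions lower than the originals, and accounting for assessor differences when aggregating scores. As an illustrative sketch only (not the authors' code; the function names, the 0-100 scale, and the acceptance margin are assumptions for illustration), the two steps might look like:

```python
# Illustrative sketch of Direct Assessment-style quality control and
# score standardization. Assumes raw caption ratings on a 0-100 scale.
from statistics import mean, stdev

def passes_quality_control(orig_scores, degraded_scores, margin=5.0):
    """An assessor passes if, on average, they rate original captions
    higher than deliberately degraded versions of the same captions."""
    return mean(orig_scores) - mean(degraded_scores) > margin

def standardize(scores_by_assessor):
    """Convert each assessor's raw scores to z-scores so that
    differences in individual scoring strictness cancel out before
    scores are averaged per caption-generation system."""
    z = {}
    for assessor, scores in scores_by_assessor.items():
        mu = mean(scores)
        sd = stdev(scores) if len(scores) > 1 else 0.0
        z[assessor] = [(s - mu) / sd if sd else 0.0 for s in scores]
    return z
```

A generous assessor and a harsh assessor who rank systems the same way produce the same z-scores after standardization, which is what makes cross-assessor averaging meaningful.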
Published in: | arXiv.org 2017-10 |
---|---|
Main Authors: | Graham, Yvette; Awad, George; Smeaton, Alan |
Format: | Article |
Language: | English |
Subjects: | Ground truth; Machine translation; Quality assessment |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Graham, Yvette; Awad, George; Smeaton, Alan |
description | We present Direct Assessment, a method for manually assessing the quality of automatically-generated captions for video. Evaluating the accuracy of video captions is particularly difficult because for any given video clip there is no definitive ground truth or correct answer against which to measure. Automatic metrics for comparing automatic video captions against a manual caption such as BLEU and METEOR, drawn from techniques used in evaluating machine translation, were used in the TRECVid video captioning task in 2016 but these are shown to have weaknesses. The work presented here brings human assessment into the evaluation by crowdsourcing how well a caption describes a video. We automatically degrade the quality of some sample captions which are assessed manually and from this we are able to rate the quality of the human assessors, a factor we take into account in the evaluation. Using data from the TRECVid video-to-text task in 2016, we show how our direct assessment method is replicable and robust and should scale to settings where there are many caption-generation techniques to be evaluated. |
doi_str_mv | 10.48550/arxiv.1710.10586 |
format | article |
publisher | Ithaca: Cornell University Library, arXiv.org |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2017-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2076303094 |
source | ProQuest - Publicly Available Content Database |
subjects | Ground truth; Machine translation; Quality assessment |
title | Evaluation of Automatic Video Captioning Using Direct Assessment |