Deepfakes Detection with Automatic Face Weighting

Altered and manipulated multimedia is increasingly present and widely distributed via social media platforms. Advanced video manipulation tools enable the generation of highly realistic-looking altered multimedia. While many methods have been presented to detect manipulations, most of them fail when evaluated with data outside of the datasets used in research environments. In order to address this problem, the Deepfake Detection Challenge (DFDC) provides a large dataset of videos containing realistic manipulations and an evaluation system that ensures that methods work quickly and accurately, even when faced with challenging data. In this paper, we introduce a method based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) that extracts visual and temporal features from faces present in videos to accurately detect manipulations. The method is evaluated with the DFDC dataset, providing competitive results compared to other techniques.
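
The abstract describes a pipeline that extracts CNN features from detected faces, weights those faces automatically, and models temporal behaviour with an RNN. The sketch below is a minimal PyTorch illustration of that general idea, assuming a toy convolutional backbone, a GRU temporal model, and a simple averaged fusion of two logits; the class name, heads, and hyperparameters are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn


class FaceWeightedVideoClassifier(nn.Module):
    """Sketch: per-face CNN features + learned face weighting + GRU over time."""

    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Small CNN backbone producing one feature vector per face crop
        # (a stand-in for whatever backbone the paper actually uses).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One manipulation logit and one unnormalized weight per face crop.
        self.logit_head = nn.Linear(feat_dim, 1)
        self.weight_head = nn.Linear(feat_dim, 1)
        # Temporal model over the sequence of per-frame face features.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.video_head = nn.Linear(hidden_dim, 1)

    def forward(self, faces):
        # faces: (T, 3, H, W) -- one detected face crop per sampled frame.
        feats = self.cnn(faces)                              # (T, feat_dim)
        face_logits = self.logit_head(feats)                 # (T, 1)
        weights = torch.softmax(self.weight_head(feats), 0)  # automatic face weighting
        weighted_logit = (weights * face_logits).sum(0)      # (1,)
        _, h = self.rnn(feats.unsqueeze(0))                  # temporal features
        temporal_logit = self.video_head(h[-1]).squeeze(0)   # (1,)
        # Fuse the face-weighted and temporal cues; this fusion is illustrative,
        # not the authors' formulation.
        return torch.sigmoid(0.5 * (weighted_logit + temporal_logit))


# Example: score 16 face crops (e.g. 224x224) sampled from one video.
model = FaceWeightedVideoClassifier()
prob_fake = model(torch.randn(16, 3, 224, 224))
print(float(prob_fake))
```

In practice a face detector would supply the crops and a pretrained backbone would replace the toy CNN; the weighting head is the part that lets unreliable face detections contribute less to the final video-level score.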

Bibliographic Details
Main Authors: Montserrat, Daniel Mas; Hao, Hanxiang; Yarlagadda, S. K.; Baireddy, Sriram; Shao, Ruiting; Horvath, Janos; Bartusiak, Emily; Yang, Justin; Guera, David; Zhu, Fengqing; Delp, Edward J.
Format: Conference Proceeding
Language: English
Published in: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2020, pp. 2851-2859
Publisher: IEEE
DOI: 10.1109/CVPRW50498.2020.00342
EISSN: 2160-7516
Subjects: Face; Feature extraction; Recurrent neural networks; Social network services; Streaming media; Training
Source: IEEE Xplore All Conference Series
Online Access: Request full text