
ProActive DeepFake Detection using GAN-based Visible Watermarking

Bibliographic Details
Published in: ACM Transactions on Multimedia Computing, Communications, and Applications, 2024-11, Vol. 20 (11), p. 1-27, Article 344
Main Authors: Nadimpalli, Aakash Varma; Rattani, Ajita
Format: Article
Language:English
Subjects: Computer vision; Computing methodologies; Image manipulation; Security and privacy; Social aspects of security and privacy
description With the advances in generative adversarial networks (GANs), facial manipulations called DeepFakes have caused major security risks and raised severe societal concerns. However, popular passive DeepFake detection is an ex-post forensics countermeasure and fails to block the spread of disinformation in advance. Alternatively, precautions such as adding perturbations to the real data, so that the resulting DeepFake output is unnaturally distorted and easily spotted by the human eye, have been introduced as proactive defenses. Recent studies suggest that these existing proactive defenses can be easily bypassed by applying simple image transformation and reconstruction techniques to the perturbed real data and the distorted output, respectively. The aim of this article is to propose a novel proactive DeepFake detection technique using GAN-based visible watermarking. To this end, we propose a reconstructive regularization term added to the GAN’s loss function that embeds a unique watermark at the assigned location of the generated fake image. Thorough experiments on multiple datasets confirm the viability of the proposed approach as a proactive defense mechanism against DeepFakes from the perspective of detection by the human eye. Thus, our proposed watermark-based GANs prevent the abuse of pretrained GANs and smartphone apps, available via online repositories, for malicious DeepFake creation. Further, the watermarked DeepFakes can also be detected by SOTA DeepFake detectors. This is critical for applications where automatic DeepFake detectors are used for mass audits, given the huge cost of having human observers manually examine large amounts of data.
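The reconstructive regularization described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the image layout (single-channel H x W arrays in [0, 1]), the region coordinates, the weight `lam`, and the non-saturating adversarial term are all assumptions for the sake of the example.

```python
import numpy as np

def watermark_regularizer(fake_img, watermark, top, left):
    """L2 penalty pulling an assigned region of the generated image
    toward a fixed visible watermark template (hypothetical layout)."""
    h, w = watermark.shape
    region = fake_img[top:top + h, left:left + w]
    return float(np.mean((region - watermark) ** 2))

def generator_loss(d_score_on_fake, fake_img, watermark, top=0, left=0, lam=10.0):
    """Non-saturating GAN generator loss plus the reconstructive
    watermark term, weighted by the assumed hyperparameter `lam`."""
    adv = -np.log(d_score_on_fake + 1e-8)  # reward fooling the discriminator
    rec = watermark_regularizer(fake_img, watermark, top, left)
    return adv + lam * rec
```

When the assigned region of the generated image matches the watermark template exactly, the regularizer vanishes and the loss reduces to the plain adversarial term; any deviation from the watermark is penalized, which is what drives the generator to stamp the visible mark into every fake it produces.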
doi 10.1145/3625547
publisher New York, NY: ACM
issn 1551-6857
eissn 1551-6865
source Association for Computing Machinery:Jisc Collections:ACM OPEN Journals 2023-2025 (reading list)
subjects Computer vision
Computing methodologies
Image manipulation
Security and privacy
Social aspects of security and privacy