Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms
Since their emergence a few years ago, artificial intelligence (AI)-synthesized media—so-called deep fakes—have dramatically increased in quality, sophistication, and ease of generation. Deep fakes have been weaponized for use in nonconsensual pornography, large-scale fraud, and disinformation campaigns. Of particular concern is how deep fakes will be weaponized against world leaders during election cycles or times of armed conflict. We describe an identity-based approach for protecting world leaders from deep-fake imposters. Trained on several hours of authentic video, this approach captures distinct facial, gestural, and vocal mannerisms that we show can distinguish a world leader from an impersonator or deep-fake imposter.
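The abstract describes the approach only at a high level (mannerism features learned from several hours of authentic video). As a rough illustration, and not the authors' implementation, the Python sketch below shows the general shape of such identity-based verification: it assumes hypothetical per-clip behavioral feature vectors (random stand-ins here) and trains a generic one-class SVM on the reference identity so that out-of-character clips are flagged.

```python
# Illustrative sketch only (not the paper's implementation): one-class
# classification over hypothetical per-clip behavioral feature vectors.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in features: in practice these would summarize facial, gestural, and
# vocal mannerisms per video clip; here they are random vectors for illustration.
authentic_clips = rng.normal(loc=0.0, scale=1.0, size=(200, 64))  # reference identity
suspect_clips = rng.normal(loc=1.5, scale=1.0, size=(10, 64))     # impostor-like clips

# Fit on authentic footage only, then score unseen clips.
scaler = StandardScaler().fit(authentic_clips)
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
detector.fit(scaler.transform(authentic_clips))

# predict() returns +1 for clips consistent with the learned identity, -1 otherwise.
labels = detector.predict(scaler.transform(suspect_clips))
print("clips flagged as out of character:", int((labels == -1).sum()), "of", len(labels))
```

A real system would replace the random stand-ins with mannerism features extracted from video and audio of the person being protected.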
Published in: Proceedings of the National Academy of Sciences - PNAS, 2022-11, Vol. 119 (48), p. 1-3
Main Authors: Boháček, Matyáš; Farid, Hany
Format: Article
Language: English
Subjects: Artificial Intelligence; Brief Reports; Deception; Elections; Fraud; Gestures; Physical Sciences; Pornography
DOI: 10.1073/pnas.2216035119
PMID: 36417442
Publisher: National Academy of Sciences (United States)
Publication date: 2022-11-29
Rights: Copyright © 2022 the Author(s). Published by PNAS.
ISSN: 0027-8424; EISSN: 1091-6490
Source: Open Access: PubMed Central