
Natural attack for pre-trained models of code

Bibliographic Details
Main Authors: Yang, Zhou; Shi, Jieke; He, Junda; Lo, David
Format: Conference Proceeding
Language: English
Online Access: Request full text
container_end_page 1493
container_start_page 1482
creator Yang, Zhou
Shi, Jieke
He, Junda
Lo, David
description Pre-trained models of code have achieved success in many important software engineering tasks. However, these powerful models are vulnerable to adversarial attacks that slightly perturb model inputs to make a victim model produce wrong outputs. Current works mainly attack models of code with examples that preserve the operational program semantics but ignore a fundamental requirement for adversarial example generation: perturbations should be natural to human judges, which we refer to as the naturalness requirement. In this paper, we propose ALERT (Naturalness Aware Attack), a black-box attack that adversarially transforms inputs to make victim models produce wrong outputs. Unlike prior works, ALERT considers the natural semantics of the generated examples while also preserving the operational semantics of the original inputs. Our user study demonstrates that human developers consistently consider adversarial examples generated by ALERT to be more natural than those generated by the state-of-the-art work by Zhang et al., which ignores the naturalness requirement. When attacking CodeBERT, our approach can achieve attack success rates of 53.62%, 27.79%, and 35.78% across three downstream tasks: vulnerability prediction, clone detection, and code authorship attribution. On GraphCodeBERT, our approach can achieve average success rates of 76.95%, 7.96%, and 61.47% on the same three tasks. These results outperform the baseline by 14.07% and 18.56% on average on the two pre-trained models. Finally, we investigated the value of the generated adversarial examples for hardening victim models through an adversarial fine-tuning procedure and demonstrated that the accuracy of CodeBERT and GraphCodeBERT against ALERT-generated adversarial examples increased by 87.59% and 92.32%, respectively.
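
The description above outlines the core idea: rename identifiers with substitutes that look natural to a human reader, while querying the victim model as a black box until its prediction flips. The sketch below illustrates that idea as a minimal greedy variant in Python; it is not the authors' implementation, and the names (greedy_naturalness_attack, rename_identifier, victim_predict, substitutes) are illustrative placeholders. ALERT additionally ranks candidate substitutes using the pre-trained model's masked-token predictions and falls back to a genetic-algorithm search when greedy substitution fails; both steps are omitted here for brevity.

import re
from typing import Callable, Dict, List, Optional

def rename_identifier(code: str, old: str, new: str) -> str:
    # Whole-word rename of one identifier; for simple snippets this keeps
    # the operational semantics of the program unchanged.
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def greedy_naturalness_attack(
    code: str,
    true_label: int,
    identifiers: List[str],
    substitutes: Dict[str, List[str]],     # naturalness-ranked candidates per identifier (assumed given)
    victim_predict: Callable[[str], int],  # black-box victim: source code -> predicted label
) -> Optional[str]:
    # Greedily try natural identifier substitutions until the victim's
    # prediction flips; return the adversarial example, or None on failure.
    current = code
    for ident in identifiers:
        for candidate in substitutes.get(ident, []):
            mutated = rename_identifier(current, ident, candidate)
            if victim_predict(mutated) != true_label:
                return mutated  # semantics-preserving input that fools the victim
    return None

# Toy usage: a fake victim whose prediction depends on a variable name.
if __name__ == "__main__":
    snippet = "def check(buf):\n    tmp = len(buf)\n    return tmp > 0"
    fake_victim = lambda c: 1 if "tmp" in c else 0
    adv = greedy_naturalness_attack(
        snippet,
        true_label=1,
        identifiers=["tmp", "buf"],
        substitutes={"tmp": ["size", "count"], "buf": ["data"]},
        victim_predict=fake_victim,
    )
    print(adv)

In this toy run, renaming tmp to size flips the fake victim's label while leaving the program's behavior intact, which is exactly the kind of operational-semantics-preserving, human-plausible perturbation the paper targets.
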
doi_str_mv 10.1145/3510003.3510146
format conference_proceeding
fulltext fulltext_linktorsrc
identifier ISBN: 9781450392211
ispartof 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE), 2022, p.1482-1493
issn 1558-1225
language eng
recordid cdi_ieee_primary_9794089
source IEEE Xplore All Conference Series
subjects Adversarial Attack
Cloning
Codes
Computing methodologies -- Machine learning -- Machine learning approaches -- Neural networks
Genetic Algorithm
Perturbation methods
Pre-Trained Models
Semantics
Software and its engineering -- Software creation and management -- Search-based software engineering
Software and its engineering -- Software creation and management -- Software verification and validation -- Software defect analysis -- Software testing and debugging
Software engineering
Task analysis
Transforms
title Natural attack for pre-trained models of code
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T22%3A09%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-acm_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Natural%20attack%20for%20pre-trained%20models%20of%20code&rft.btitle=2022%20IEEE/ACM%2044th%20International%20Conference%20on%20Software%20Engineering%20(ICSE)&rft.au=Yang,%20Zhou&rft.date=2022-05-21&rft.spage=1482&rft.epage=1493&rft.pages=1482-1493&rft.eissn=1558-1225&rft.isbn=9781450392211&rft.isbn_list=1450392210&rft.coden=IEEPAD&rft_id=info:doi/10.1145/3510003.3510146&rft.eisbn=9781450392211&rft.eisbn_list=1450392210&rft_dat=%3Cacm_CHZPO%3Eacm_books_10_1145_3510003_3510146%3C/acm_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-a275t-5f9f67aa5afaf40c479822395de76b71668c07504a4b01e5e2c9f9610556b42b3%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=9794089&rfr_iscdi=true