Measuring Non-Adversarial Reproduction of Training Data in Large Language Models
| Published in: | arXiv.org 2024-11 |
|---|---|
| Main Authors: | Aerni, Michael; Rando, Javier; Debenedetti, Edoardo; Carlini, Nicholas; Ippolito, Daphne; Tramèr, Florian |
| Format: | Article |
| Language: | English |
| Subjects: | Internet; Large language models |
| Online Access: | Get full text |
| cited_by | |
|---|---|
| cites | |
| container_end_page | |
| container_issue | |
| container_start_page | |
| container_title | arXiv.org |
| container_volume | |
| creator | Aerni, Michael; Rando, Javier; Debenedetti, Edoardo; Carlini, Nicholas; Ippolito, Daphne; Tramèr, Florian |
| description | Large language models memorize parts of their training data. Memorizing short snippets and facts is required to answer questions about the world and to be fluent in any language. But models have also been shown to reproduce long verbatim sequences of memorized text when prompted by a motivated adversary. In this work, we investigate an intermediate regime of memorization that we call non-adversarial reproduction, where we quantify the overlap between model responses and pretraining data when responding to natural and benign prompts. For a variety of innocuous prompt categories (e.g., writing a letter or a tutorial), we show that up to 15% of the text output by popular conversational language models overlaps with snippets from the Internet. In worst cases, we find generations where 100% of the content can be found exactly online. For the same tasks, we find that human-written text has far less overlap with Internet data. We further study whether prompting strategies can close this reproduction gap between models and humans. While appropriate prompting can reduce non-adversarial reproduction on average, we find that mitigating worst-case reproduction of training data requires stronger defenses -- even for benign interactions. |
| format | article |
| fulltext | fulltext |
| identifier | EISSN: 2331-8422 |
| ispartof | arXiv.org, 2024-11 |
| issn | 2331-8422 |
| language | eng |
| recordid | cdi_proquest_journals_3129863492 |
| source | Publicly Available Content Database |
| subjects | Internet; Large language models |
| title | Measuring Non-Adversarial Reproduction of Training Data in Large Language Models |
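The abstract quantifies the overlap between a model's responses and pretraining data. As a rough illustration of what such a metric could look like (not the paper's actual pipeline), one can measure the fraction of characters in a response that fall inside a substring of at least `k` characters found verbatim in a reference corpus. The names `MIN_LEN` and `reproduction_rate`, the threshold value, and the brute-force search are all hypothetical choices for this sketch.

```python
MIN_LEN = 50  # minimum verbatim match length; an illustrative threshold, not the paper's


def reproduction_rate(response: str, corpus: set[str], k: int = MIN_LEN) -> float:
    """Fraction of characters in `response` covered by some length-k window
    that appears verbatim in at least one corpus document (brute force)."""
    n = len(response)
    if n < k:
        return 0.0
    covered = [False] * n
    for i in range(n - k + 1):
        window = response[i:i + k]
        # A real system would use a suffix array or n-gram index over the corpus;
        # linear scanning each document is only workable for a toy example.
        if any(window in doc for doc in corpus):
            for j in range(i, i + k):
                covered[j] = True
    return sum(covered) / n


# Toy usage: the whole response appears verbatim in the corpus.
corpus = {"the quick brown fox jumps over the lazy dog " * 3}
resp = "the quick brown fox jumps over the lazy dog"
print(reproduction_rate(resp, corpus, k=10))  # 1.0: every character is covered
```

Averaging this rate over many benign prompts would give a per-model overlap figure comparable in spirit to the "up to 15%" statistic in the abstract.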