
A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation

Large Language Models (LLMs) have sparked a new wave of enthusiasm for AI, owing to their ability to engage end users in human-level conversations with detailed and articulate answers across many knowledge domains. In response to their rapid adoption in many industrial applications, this survey concerns their safety and trustworthiness. First, we review known vulnerabilities of LLMs, categorising them into inherent issues, intended attacks, and unintended bugs. Then, we consider whether and how Verification and Validation (V&V) techniques, which have been widely developed for traditional software and for deep learning models such as convolutional neural networks, can be integrated and further extended throughout the lifecycle of LLMs to provide rigorous analysis of the safety and trustworthiness of LLMs and their applications. Specifically, we consider four complementary techniques: falsification and evaluation, verification, runtime monitoring, and ethical use. Considering the rapid development of LLMs, this survey does not aim to be complete (although it includes 300 references), especially when it comes to the applications of LLMs in various domains, but is rather a collection of organised literature reviews and discussions to support a quick understanding of the safety and trustworthiness issues from the perspective of V&V.
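The first of the four techniques named in the abstract, falsification and evaluation, searches for inputs that expose failures. The sketch below is a minimal illustration of that idea, not taken from the survey: it applies a simple character-swap perturbation to a prompt and collects perturbations whose answer diverges from the reference answer. The `query_llm` stub and the perturbation strategy are hypothetical stand-ins for a real model call and a real test generator.

```python
# Illustrative falsification loop: metamorphic testing of an LLM's robustness.
# All names here are hypothetical; a real harness would call an actual LLM API.
import random

def query_llm(prompt: str) -> str:
    # Placeholder model standing in for a real LLM call.
    return "Paris" if "capital of france" in prompt.lower() else "unknown"

def perturb(prompt: str, rng: random.Random) -> str:
    # Swap two adjacent characters: a minimal, near-semantic-preserving edit.
    chars = list(prompt)
    i = rng.randrange(len(chars) - 1)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def falsify(prompt: str, trials: int = 100, seed: int = 0) -> list[str]:
    """Return perturbed prompts whose answer diverges from the original's."""
    rng = random.Random(seed)
    reference = query_llm(prompt)
    return [p for p in (perturb(prompt, rng) for _ in range(trials))
            if query_llm(p) != reference]

counterexamples = falsify("What is the capital of France?")
print(f"{len(counterexamples)} divergent perturbations found")
```

Each divergent perturbation is a concrete counterexample to the model's robustness on that prompt, which is what falsification, unlike verification, aims to produce.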

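Runtime monitoring, another of the four techniques, checks a model's outputs against a safety policy during deployment rather than proving properties offline. The sketch below is a hedged illustration under the assumption of a keyword denylist; `toy_model` and `UNSAFE_TERMS` are hypothetical, and a production monitor would typically use learned classifiers rather than substring matching.

```python
# Illustrative runtime monitor: wraps a (hypothetical) generation function and
# blocks outputs matching a simple unsafe-content policy.
from typing import Callable

UNSAFE_TERMS = {"make a weapon", "disable the safety"}  # toy policy, hypothetical

def monitor(generate: Callable[[str], str]) -> Callable[[str], str]:
    def guarded(prompt: str) -> str:
        output = generate(prompt)
        # Intercept the output before it reaches the end user.
        if any(term in output.lower() for term in UNSAFE_TERMS):
            return "[blocked by runtime monitor]"
        return output
    return guarded

def toy_model(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return f"Echo: {prompt}"

safe_model = monitor(toy_model)
print(safe_model("hello"))          # passes through unchanged
print(safe_model("make a weapon"))  # intercepted by the monitor
```

The design point is that the monitor wraps the model as an independent layer, so the policy can be updated or swapped without retraining the model itself.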

Bibliographic Details
Published in: arXiv.org, 2023-05
Main Authors: Huang, Xiaowei; Ruan, Wenjie; Huang, Wei; Jin, Gaojie; Dong, Yi; Wu, Changshun; Bensalem, Saddek; Mu, Ronghui; Qi, Yi; Zhao, Xingyu; Cai, Kaiwen; Zhang, Yanghao; Wu, Sihao; Xu, Peipei; Wu, Dengyu; Freitas, Andre; Mustafa, Mustafa A
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Artificial neural networks; Domains; Industrial applications; Large language models; Literature reviews; Machine learning; Safety; Trustworthiness; Verification