
Proof Automation with Large Language Models

Bibliographic Details
Main Authors: Lu, Minghai, Delaware, Benjamin, Zhang, Tianyi
Format: Conference Proceeding
Language:English
Subjects:
Online Access:Request full text
container_end_page 1520
container_start_page 1509
creator Lu, Minghai
Delaware, Benjamin
Zhang, Tianyi
description Interactive theorem provers such as Coq are powerful tools to formally guarantee the correctness of software. However, using these tools requires significant manual effort and expertise. While Large Language Models (LLMs) have shown promise in automatically generating informal proofs in natural language, they are less effective at generating formal proofs in interactive theorem provers. In this paper, we conduct a formative study to identify common mistakes made by LLMs when asked to generate formal proofs. By analyzing 520 proof generation errors made by GPT-3.5, we found that GPT-3.5 often identified the correct high-level structure of a proof, but struggled to get the lower-level details correct. Based on this insight, we propose PALM, a novel generate-then-repair approach that first prompts an LLM to generate an initial proof and then leverages targeted symbolic methods to iteratively repair low-level problems. We evaluate PALM on a large dataset that includes more than 10K theorems. Our results show that PALM significantly outperforms other state-of-the-art approaches, successfully proving 76.6% to 180.4% more theorems. Moreover, PALM proves 1270 theorems beyond the reach of existing approaches. We also demonstrate the generalizability of PALM across different LLMs.
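The description above outlines PALM's generate-then-repair workflow: an LLM proposes an initial proof script, and symbolic methods then iteratively repair low-level failures while keeping the high-level structure. The Python sketch below illustrates the general shape of such a loop under stated assumptions; the function names (generate_proof, check_proof, repair_tactic) and the control flow are illustrative placeholders, not PALM's actual implementation or API.

```python
# Hypothetical sketch of a generate-then-repair loop in the spirit of PALM.
# All callables here are assumed placeholders, not PALM's real interface.
from typing import Callable, List, Optional


def generate_then_repair(
    theorem: str,
    generate_proof: Callable[[str], List[str]],              # LLM: theorem statement -> list of proof tactics
    check_proof: Callable[[str, List[str]], Optional[int]],  # proof checker: index of first failing tactic, or None if accepted
    repair_tactic: Callable[[str, List[str], int], Optional[List[str]]],  # symbolic repair of a single failing step
    max_rounds: int = 10,
) -> Optional[List[str]]:
    """Ask an LLM for an initial proof script, then iteratively repair
    low-level failures with symbolic methods, preserving the LLM's
    high-level proof structure."""
    tactics = generate_proof(theorem)
    for _ in range(max_rounds):
        failing = check_proof(theorem, tactics)
        if failing is None:
            return tactics            # the checker accepted every tactic
        fixed = repair_tactic(theorem, tactics, failing)
        if fixed is None:
            return None               # no symbolic repair applies; give up
        tactics = fixed
    return None
```

The design point this sketch tries to capture is the paper's key observation: the LLM is trusted for the high-level proof skeleton, while deterministic symbolic procedures handle the low-level details it tends to get wrong.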
doi_str_mv 10.1145/3691620.3695521
format conference_proceeding
fulltext fulltext_linktorsrc
identifier ISBN: 9798400712487
ispartof IEEE/ACM International Conference on Automated Software Engineering : [proceedings], 2024, p.1509-1520
issn 2643-1572
language eng
recordid cdi_ieee_primary_10764874
source IEEE Xplore All Conference Series
subjects Automation
Backtracking
Formal software verification
Large language models
Maintenance engineering
Manuals
Natural languages
Software
Software and its engineering -- Software creation and management -- Software verification and validation -- Formal software verification
Software and its engineering -- Software organization and properties -- Software functional properties -- Formal methods -- Software verification
Software and its engineering → Software verification
Software engineering
title Proof Automation with Large Language Models