Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases
Published in: | arXiv.org, 2024-10 |
---|---|
Main Authors: | Su, Zian; Xu, Xiangzhe; Huang, Ziyang; Zhang, Kaiyuan; Zhang, Xiangyu |
Format: | Article |
Language: | English |
Subjects: | Binary codes; Black boxes; Context; Encoders-Decoders; Knowledge bases (artificial intelligence); Recovery; Semantics; Source code |
container_title | arXiv.org |
---|---|
creator | Su, Zian; Xu, Xiangzhe; Huang, Ziyang; Zhang, Kaiyuan; Zhang, Xiangyu |
description | Human-Oriented Binary Reverse Engineering (HOBRE) lies at the intersection of binary and source code, aiming to lift binary code to human-readable content relevant to source code, thereby bridging the binary-source semantic gap. Recent advancements in uni-modal code model pre-training, particularly in generative Source Code Foundation Models (SCFMs) and binary understanding models, have laid the groundwork for transfer learning applicable to HOBRE. However, existing approaches for HOBRE rely heavily on uni-modal models like SCFMs for supervised fine-tuning or general LLMs for prompting, resulting in sub-optimal performance. Inspired by recent progress in large multi-modal models, we propose that it is possible to harness the strengths of uni-modal code models from both sides to bridge the semantic gap effectively. In this paper, we introduce a novel probe-and-recover framework that incorporates a binary-source encoder-decoder model and black-box LLMs for binary analysis. Our approach leverages the pre-trained knowledge within SCFMs to synthesize relevant, symbol-rich code fragments as context. This additional context enables black-box LLMs to enhance recovery accuracy. We demonstrate significant improvements in zero-shot binary summarization and binary function name recovery, with a 10.3% relative gain in CHRF and a 16.7% relative gain in a GPT4-based metric for summarization, as well as a 6.7% and 7.4% absolute increase in token-level precision and recall for name recovery, respectively. These results highlight the effectiveness of our approach in automating and improving binary code analysis. |
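The abstract reports a 6.7% and 7.4% absolute gain in token-level precision and recall for function name recovery. As a minimal sketch of what such a metric computes: the helper below splits names on underscores and camelCase boundaries into lowercase tokens, then scores a predicted name against a reference. This tokenization is a common convention for name-recovery evaluation, not necessarily the paper's exact implementation, and the function name `token_metrics` is illustrative.

```python
import re


def token_metrics(predicted: str, reference: str) -> tuple[float, float]:
    """Token-level precision and recall for a recovered function name.

    Precision: fraction of distinct predicted tokens found in the reference.
    Recall: fraction of distinct reference tokens found in the prediction.
    """

    def tokens(name: str) -> list[str]:
        # Split snake_case first, then camelCase runs, then lowercase.
        parts: list[str] = []
        for chunk in name.split("_"):
            parts.extend(re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", chunk))
        return [p.lower() for p in parts if p]

    pred, ref = set(tokens(predicted)), set(tokens(reference))
    if not pred or not ref:
        return 0.0, 0.0
    overlap = len(pred & ref)
    return overlap / len(pred), overlap / len(ref)
```

For example, scoring a prediction `parseJsonFile` against a ground-truth `parse_file` yields precision 2/3 (two of three predicted tokens match) and recall 1.0 (both reference tokens are covered).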
format | article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-10 |
issn | 2331-8422 |
language | eng |
source | Publicly Available Content Database |
subjects | Binary codes; Black boxes; Context; Encoders-Decoders; Knowledge bases (artificial intelligence); Recovery; Semantics; Source code |
title | Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases |