Large language models for generative information extraction: a survey
Information Extraction (IE) aims to extract structural knowledge from plain natural language texts. Recently, generative Large Language Models (LLMs) have demonstrated remarkable capabilities in text understanding and generation. As a result, numerous works have been proposed to integrate LLMs for IE tasks based on a generative paradigm.
Published in: | Frontiers of Computer Science, 2024-12, Vol. 18 (6), p. 186357, Article 186357 |
Main Authors: | Xu, Derong; Chen, Wei; Peng, Wenjun; Zhang, Chao; Xu, Tong; Zhao, Xiangyu; Wu, Xian; Zheng, Yefeng; Wang, Yang; Chen, Enhong |
Format: | Article |
Language: | English |
Subjects: | Computer Science; Empirical analysis; Information retrieval; Knowledge; Language; Large language models; Natural language; Natural language processing; Repositories; Review Article; Taxonomy |
creator | Xu, Derong; Chen, Wei; Peng, Wenjun; Zhang, Chao; Xu, Tong; Zhao, Xiangyu; Wu, Xian; Zheng, Yefeng; Wang, Yang; Chen, Enhong |
description | Information Extraction (IE) aims to extract structural knowledge from plain natural language texts. Recently, generative Large Language Models (LLMs) have demonstrated remarkable capabilities in text understanding and generation. As a result, numerous works have been proposed to integrate LLMs for IE tasks based on a generative paradigm. To conduct a comprehensive systematic review and exploration of LLM efforts for IE tasks, in this study, we survey the most recent advancements in this field. We first present an extensive overview by categorizing these works in terms of various IE subtasks and techniques, and then we empirically analyze the most advanced methods and discover the emerging trend of IE tasks with LLMs. Based on a thorough review conducted, we identify several insights in technique and promising research directions that deserve further exploration in future studies. We maintain a public repository and consistently update related works and resources on GitHub (LLM4IE repository). |
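The "generative paradigm" named in the abstract frames IE as text generation: instead of labeling tokens, the model is prompted to emit structured records directly, which are then parsed back into tuples. A minimal sketch of that flow, with `fake_llm` standing in for any real generative model (all names here are hypothetical, not from the survey):

```python
# Generative IE sketch: prompt an LLM for structured output, parse it back.

def build_ner_prompt(text, entity_types):
    """Compose a prompt asking the model to list entities as 'entity | type' lines."""
    types = ", ".join(entity_types)
    return (
        f"Extract all entities of types [{types}] from the text.\n"
        f"Answer with one 'entity | type' pair per line.\n"
        f"Text: {text}\nAnswer:"
    )

def parse_generation(output):
    """Parse generated 'entity | type' lines into structured (entity, type) tuples."""
    pairs = []
    for line in output.strip().splitlines():
        if "|" in line:
            entity, etype = (part.strip() for part in line.split("|", 1))
            pairs.append((entity, etype))
    return pairs

def fake_llm(prompt):
    # Stand-in for a real model call; returns a canned generation.
    return "Paris | LOCATION\nMarie Curie | PERSON"

prompt = build_ner_prompt("Marie Curie moved to Paris.", ["PERSON", "LOCATION"])
print(parse_generation(fake_llm(prompt)))
# -> [('Paris', 'LOCATION'), ('Marie Curie', 'PERSON')]
```

The same prompt-then-parse shape extends to the other IE subtasks the survey categorizes (relation triples, events) by changing the requested output schema.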
doi_str_mv | 10.1007/s11704-024-40555-y |
format | article |
publisher | Beijing: Higher Education Press |
rights | The Author(s) 2024. Published under a Creative Commons Attribution 4.0 License. Peer reviewed; open access. |
fulltext | fulltext |
identifier | ISSN: 2095-2228 |
ispartof | Frontiers of Computer Science, 2024-12, Vol.18 (6), p.186357, Article 186357 |
issn | 2095-2228 (ISSN); 2095-2236 (EISSN) |
language | eng |
recordid | cdi_proquest_journals_3126807226 |
source | Springer Nature |
subjects | Computer Science; Empirical analysis; Information retrieval; Knowledge; Language; Large language models; Natural language; Natural language processing; Repositories; Review Article; Taxonomy |
title | Large language models for generative information extraction: a survey |