
FltLM: An Intergrated Long-Context Large Language Model for Effective Context Filtering and Understanding

The development of Long-Context Large Language Models (LLMs) has markedly advanced natural language processing by facilitating the processing of textual data across long documents and multiple corpora. However, Long-Context LLMs still face two critical challenges: the "lost-in-the-middle" phenomenon, whe...

Full description

Saved in:
Bibliographic Details
Published in: arXiv.org 2024-10
Main Authors: Deng, Jingyang, Shen, Zhengyang, Wang, Boyang, Su, Lixin, Cheng, Suqi, Nie, Ying, Wang, Junfeng, Yin, Dawei, Ma, Jinwen
Format: Article
Language: English
Subjects:
Online Access: Get full text
cited_by
cites
container_end_page
container_issue
container_start_page
container_title arXiv.org
container_volume
creator Deng, Jingyang
Shen, Zhengyang
Wang, Boyang
Su, Lixin
Cheng, Suqi
Nie, Ying
Wang, Junfeng
Yin, Dawei
Ma, Jinwen
description The development of Long-Context Large Language Models (LLMs) has markedly advanced natural language processing by facilitating the processing of textual data across long documents and multiple corpora. However, Long-Context LLMs still face two critical challenges: the "lost-in-the-middle" phenomenon, where crucial mid-context information is likely to be missed, and the distraction issue, where models lose focus due to overly extended contexts. To address these challenges, we propose the Context Filtering Language Model (FltLM), a novel integrated Long-Context LLM that enhances the model's performance on multi-document question-answering (QA) tasks. Specifically, FltLM incorporates a context filter with a soft mask mechanism, identifying and dynamically excluding irrelevant content so the model can concentrate on pertinent information for better comprehension and reasoning. Our approach not only mitigates these two challenges, but also enables the model to operate conveniently in a single forward pass. Experimental results demonstrate that FltLM significantly outperforms supervised fine-tuning and retrieval-based methods in complex QA scenarios, suggesting a promising solution for more accurate and reliable long-context natural language understanding applications.
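The soft-mask idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the scoring function, the sigmoid gating, and the single-head attention here are all illustrative assumptions. The key point it shows is that, instead of hard-deleting documents (as a retrieval pipeline would), each document's relevance score is mapped to a gate in (0, 1) whose logarithm is added to the attention logits, so irrelevant content is smoothly suppressed within one forward pass.

```python
# Illustrative sketch (NOT the FltLM implementation) of a soft-mask
# context filter: per-document relevance scores become gates in (0, 1),
# and log(gate) is added to the attention logits so that, after the
# softmax, each token's attention weight is scaled by its document's
# gate -- softly excluding irrelevant documents in a single pass.
import numpy as np


def soft_context_mask(relevance_scores, threshold=0.0, sharpness=5.0):
    """Map per-document relevance scores to soft gates in (0, 1).

    `threshold` and `sharpness` are hypothetical knobs: scores above the
    threshold gate toward 1, scores below it toward 0.
    """
    scores = np.asarray(relevance_scores, dtype=float)
    return 1.0 / (1.0 + np.exp(-sharpness * (scores - threshold)))


def masked_attention(query, keys, values, doc_ids, gates):
    """Single-head scaled dot-product attention with per-document gating.

    query:   (d,) vector;  keys: (n, d);  values: (n, dv)
    doc_ids: (n,) integer array mapping each token to its source document
    gates:   (num_docs,) soft gates from soft_context_mask
    """
    logits = keys @ query / np.sqrt(query.shape[0])
    # Adding log(gate) multiplies each token's post-softmax weight by its
    # document's gate (the small epsilon avoids log(0) for hard-masked docs).
    logits = logits + np.log(gates[doc_ids] + 1e-9)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ values, weights
```

For example, with two documents scoring +2.0 (relevant) and -2.0 (irrelevant), the second document's tokens receive near-zero attention weight even though they remain in the context window, which is the behavior the abstract attributes to the filter.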
format article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-10
issn 2331-8422
language eng
recordid cdi_proquest_journals_3115225042
source Publicly Available Content Database
subjects Context
Documents
Filtration
Language
Large language models
Natural language
Natural language processing
Task complexity
title FltLM: An Intergrated Long-Context Large Language Model for Effective Context Filtering and Understanding
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T21%3A10%3A06IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=FltLM:%20An%20Intergrated%20Long-Context%20Large%20Language%20Model%20for%20Effective%20Context%20Filtering%20and%20Understanding&rft.jtitle=arXiv.org&rft.au=Deng,%20Jingyang&rft.date=2024-10-09&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3115225042%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_31152250423%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=3115225042&rft_id=info:pmid/&rfr_iscdi=true