Mobile Edge Intelligence for Large Language Models: A Contemporary Survey
On-device large language models (LLMs), referring to running LLMs on edge devices, have raised considerable interest since they are more cost-effective, latency-efficient, and privacy-preserving compared with the cloud paradigm. Nonetheless, the performance of on-device LLMs is intrinsically constrained by resource limitations on edge devices. Sitting between cloud and on-device AI, mobile edge intelligence (MEI) presents a viable solution by provisioning AI capabilities at the edge of mobile networks. This article provides a contemporary survey on harnessing MEI for LLMs.
Published in: | IEEE Communications Surveys and Tutorials, 2025-01, p.1-1 |
---|---|
Main Authors: | Qu, Guanqiao; Chen, Qiyuan; Wei, Wei; Lin, Zheng; Chen, Xianhao; Huang, Kaibin |
Format: | Article |
Language: | English |
Subjects: | 6G mobile communication; Artificial intelligence; Bandwidth; edge intelligence; foundation models; Image edge detection; Internet; Large language models; mobile edge computing; Reviews; Servers; split learning; Surveys; Training; Tutorials |
Online Access: | Get full text |
cited_by | |
---|---|
cites | |
container_end_page | 1 |
container_issue | |
container_start_page | 1 |
container_title | IEEE Communications Surveys and Tutorials |
container_volume | |
creator | Qu, Guanqiao; Chen, Qiyuan; Wei, Wei; Lin, Zheng; Chen, Xianhao; Huang, Kaibin |
description | On-device large language models (LLMs), referring to running LLMs on edge devices, have raised considerable interest since they are more cost-effective, latency-efficient, and privacy-preserving compared with the cloud paradigm. Nonetheless, the performance of on-device LLMs is intrinsically constrained by resource limitations on edge devices. Sitting between cloud and on-device AI, mobile edge intelligence (MEI) presents a viable solution by provisioning AI capabilities at the edge of mobile networks. This article provides a contemporary survey on harnessing MEI for LLMs. We begin by illustrating several killer applications to demonstrate the urgent need for deploying LLMs at the network edge. Next, we present the preliminaries of LLMs and MEI, followed by resource-efficient LLM techniques. We then present an architectural overview of MEI for LLMs (MEI4LLM), outlining its core components and how it supports the deployment of LLMs. Subsequently, we delve into various aspects of MEI4LLM, extensively covering edge LLM caching and delivery, edge LLM training, and edge LLM inference. Finally, we identify future research opportunities. We hope this article inspires researchers in the field to leverage mobile edge computing to facilitate LLM deployment, thereby unleashing the potential of LLMs across various privacy- and delay-sensitive applications. |
doi_str_mv | 10.1109/COMST.2025.3527641 |
format | article |
fulltext | fulltext |
identifier | ISSN: 1553-877X |
ispartof | IEEE Communications Surveys and Tutorials, 2025-01, p.1-1 |
issn | 1553-877X 2373-745X |
language | eng |
recordid | cdi_crossref_primary_10_1109_COMST_2025_3527641 |
source | IEEE Electronic Library (IEL) Journals |
subjects | 6G mobile communication; Artificial intelligence; Bandwidth; edge intelligence; foundation models; Image edge detection; Internet; Large language models; mobile edge computing; Reviews; Servers; split learning; Surveys; Training; Tutorials |
title | Mobile Edge Intelligence for Large Language Models: A Contemporary Survey |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-24T10%3A07%3A16IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-crossref_ieee_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Mobile%20Edge%20Intelligence%20for%20Large%20Language%20Models:%20A%20Contemporary%20Survey&rft.jtitle=IEEE%20Communications%20surveys%20and%20tutorials&rft.au=Qu,%20Guanqiao&rft.date=2025-01-08&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=1553-877X&rft.eissn=2373-745X&rft_id=info:doi/10.1109/COMST.2025.3527641&rft_dat=%3Ccrossref_ieee_%3E10_1109_COMST_2025_3527641%3C/crossref_ieee_%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c1081-8a84aec02ee6d2893cb43a620602f5118e536c2f83c35a954ddbe08a93e9896b3%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10835069&rfr_iscdi=true |