
Efficient Federated Learning with Pre-Trained Large Language Model Using Several Adapter Mechanisms


Bibliographic Details
Published in: Mathematics (Basel), 2023-11, Vol. 11 (21), p. 4479
Main Authors: Kim, Gyunyeop; Yoo, Joon; Kang, Sangwoo
Format: Article
Language:English
container_issue 21
container_start_page 4479
container_title Mathematics (Basel)
container_volume 11
creator Kim, Gyunyeop; Yoo, Joon; Kang, Sangwoo
description Recent advancements in deep learning have led to various challenges, one of which is the issue of data privacy in training data. To address this issue, federated learning, a technique that merges models trained by clients on servers, has emerged as an attractive solution. However, federated learning faces challenges related to data heterogeneity and system heterogeneity. Recent observations suggest that incorporating pre-trained models into federated learning can mitigate some of these challenges. Nonetheless, the main drawback of pre-trained models lies in their typically large model size, leading to excessive data transmission when clients send these models to the server. Additionally, federated learning involves multiple global steps, which means transmitting a large language model to multiple clients results in too much data exchange. In this paper, we propose a novel approach to address this challenge using adapters. Adapters demonstrate training efficiency by training a small capacity adapter layer alongside a large language model. This unique characteristic reduces the volume of data transmission, offering a practical solution to the problem. The evaluation results demonstrate that the proposed method achieves a reduction in training time of approximately 20–40% and a transmission speed improvement of over 98% compared to previous approaches.
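To make the mechanism described in the abstract concrete, below is a minimal, illustrative PyTorch sketch of adapter-based federated learning, not the authors' implementation. The pre-trained backbone stays frozen on every client, only a small bottleneck adapter is trained locally, and only the adapter's weights are sent to the server, which averages them FedAvg-style; this is what keeps the transmitted volume small. All class names, layer sizes, and the toy data are assumptions chosen for illustration.

# Illustrative sketch only: a frozen backbone stands in for a pre-trained LLM,
# and clients exchange nothing but the small adapter's weights.
import copy
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""
    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual connection

class AdapterModel(nn.Module):
    """Frozen 'backbone' (stand-in for a pre-trained LLM) plus a trainable adapter."""
    def __init__(self, backbone: nn.Module, hidden_dim: int = 768):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # freeze the large model
            p.requires_grad = False
        self.adapter = Adapter(hidden_dim)

    def forward(self, x):
        return self.adapter(self.backbone(x))

def client_update(model: AdapterModel, data, epochs: int = 1, lr: float = 1e-3):
    """Train only the adapter locally; return just its weights for transmission."""
    opt = torch.optim.Adam(model.adapter.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return copy.deepcopy(model.adapter.state_dict())  # only the small adapter travels

def server_aggregate(adapter_states):
    """FedAvg restricted to the adapter parameters."""
    return {k: torch.stack([s[k] for s in adapter_states]).mean(dim=0)
            for k in adapter_states[0]}

if __name__ == "__main__":
    backbone = nn.Linear(768, 768)  # toy stand-in for a pre-trained language model
    global_model = AdapterModel(backbone)
    # two toy clients, each with one random regression batch
    clients = [[(torch.randn(8, 768), torch.randn(8, 768))] for _ in range(2)]
    for rnd in range(3):  # the "global steps" mentioned in the abstract
        states = []
        for data in clients:
            local = copy.deepcopy(global_model)
            states.append(client_update(local, data))
        global_model.adapter.load_state_dict(server_aggregate(states))
        print(f"round {rnd}: aggregated {len(states)} adapter updates")

In this sketch the adapter holds on the order of a hundred thousand parameters, whereas a real pre-trained language model holds hundreds of millions or more; exchanging only the adapter is the source of the transmission savings the abstract reports.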
doi_str_mv 10.3390/math11214479
format article
publisher Basel: MDPI AG
rights 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
orcidid 0000-0002-9520-5855; 0000-0002-0281-1726; 0000-0002-9604-8134
identifier ISSN: 2227-7390
ispartof Mathematics (Basel), 2023-11, Vol.11 (21), p.4479
issn 2227-7390
eissn 2227-7390
language eng
recordid cdi_doaj_primary_oai_doaj_org_article_8b526a7376044a88bc80d885df0dde07
source Publicly Available Content Database
subjects adapter transformer
Adapters
Clients
Computational linguistics
Costs
Data exchange
Data transmission
Datasets
Deep learning
Federated learning
File servers
Heterogeneity
Language
Language processing
Large language models
Machine learning
Natural language interfaces
Natural language processing
Privacy
transfer learning
title Efficient Federated Learning with Pre-Trained Large Language Model Using Several Adapter Mechanisms