
Federated Freeze BERT for text classification

Pre-trained BERT models have demonstrated exceptional performance in text classification tasks. Certain problem domains require that data remain distributed and never shared. Federated Learning (FL) allows multiple clients to collectively train a global model by sharing learned models rather than raw data. However, adopting BERT, a large model, within an FL framework incurs substantial communication costs. To address this challenge, we propose a novel framework, FedFreezeBERT, for BERT-based text classification. FedFreezeBERT works by adding an aggregation architecture on top of BERT to obtain better sentence embeddings for classification while keeping BERT's parameters frozen. Because the model parameters stay frozen, FedFreezeBERT reduces communication costs by a large factor compared to other state-of-the-art methods. FedFreezeBERT is implemented in a distributed version, where only the aggregation architecture is transferred and aggregated by FL algorithms such as FedAvg or FedProx, and in a centralized version, where the data embeddings extracted by BERT are sent to the central server to train the aggregation architecture. The experiments show that FedFreezeBERT achieves new state-of-the-art performance on Arabic sentiment analysis on the ArSarcasm-v2 dataset, with improvements of 12.9% over FedAvg/FedProx and 1.2% over the previous SOTA. FedFreezeBERT also reduces the communication cost by 5× compared to the previous SOTA.
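
The core mechanism the abstract describes is straightforward to sketch. Below is a minimal, illustrative PyTorch model in the spirit of FedFreezeBERT: every BERT parameter is frozen, and only a small aggregation head on top of the token embeddings is trainable. The attention-pooling head and the `bert-base-uncased` checkpoint are assumptions for illustration only; the record does not specify the paper's actual aggregation architecture.

```python
# Illustrative sketch (not the paper's exact architecture): a frozen BERT
# encoder with a trainable attention-pooling aggregation head and classifier.
import torch
import torch.nn as nn
from transformers import AutoModel

class FrozenBertClassifier(nn.Module):
    def __init__(self, num_classes: int, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        for p in self.bert.parameters():
            p.requires_grad = False          # freeze all BERT parameters

        hidden = self.bert.config.hidden_size
        # Trainable aggregation head: attention pooling over token embeddings,
        # then a linear classifier. Only these parameters are ever trained or
        # communicated in the federated setting.
        self.attn_score = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():                # no gradients flow into BERT
            tokens = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        scores = self.attn_score(tokens).squeeze(-1)            # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        sentence = torch.einsum("bt,bth->bh", weights, tokens)  # pooled sentence embedding
        return self.classifier(sentence)

    def head_state(self):
        # Only the aggregation head is shared with the server.
        return {k: v for k, v in self.state_dict().items() if not k.startswith("bert.")}
```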

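Because only the head is trained, the distributed variant needs to communicate just those few parameters each round, which is where the claimed communication savings come from. Here is a hedged sketch of the server-side FedAvg step over head-only state dicts; the function and variable names are illustrative, not from the paper:

```python
# Weighted FedAvg over per-client head state dicts (aggregation head only).
# FedProx would additionally add a proximal term to each client's local
# objective; only the plain FedAvg aggregation step is shown here.
import torch

def fedavg(head_states, num_examples):
    """Average client head parameters, weighted by client dataset size."""
    total = sum(num_examples)
    avg = {}
    for key in head_states[0]:
        avg[key] = sum(
            state[key] * (n / total)
            for state, n in zip(head_states, num_examples)
        )
    return avg

# One communication round (illustrative):
#   1. the server broadcasts the current head to all clients;
#   2. each client trains the head locally with BERT frozen;
#   3. clients upload only the head (a few MB at most, versus hundreds of MB
#      for full BERT-base updates), and the server aggregates:
# new_head = fedavg([c.head_state() for c in clients],
#                   [len(c.data) for c in clients])
```

In the centralized variant described in the abstract, clients instead send their frozen-BERT sentence embeddings to the central server once, and the server trains the aggregation head directly on those embeddings, with no round-based averaging at all.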

Bibliographic Details
Published in: Journal of Big Data, 2024-12, Vol. 11 (1), Article 28 (16 pages)
Main Authors: Galal, Omar; Abdel-Gawad, Ahmed H.; Farouk, Mona
Format: Article
Language: English
DOI: 10.1186/s40537-024-00885-x
Publisher: Cham: Springer International Publishing
Rights: © The Author(s) 2024. Published open access under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).
ISSN: 2196-1115
EISSN: 2196-1115
Source: Publicly Available Content Database (ProQuest); Social Science Premium Collection; ABI/INFORM Global; Springer Nature - SpringerLink Journals - Fully Open Access
Subjects:
Algorithms
BERT
Big Data
Classification
Communication
Communications Engineering
Computational Science and Engineering
Computer Science
Data mining
Data Mining and Knowledge Discovery
Database Management
Federated Learning (FL)
Freezing
Information Storage and Retrieval
Machine learning
Mathematical Applications in Computer Science
Mathematical models
Natural Language Processing (NLP)
Networks
Parameters
Pre-trained Language Models
Sentiment analysis
Text categorization
Text classification