
Greedy‐based user selection for federated graph neural networks with limited communication resources

Recently, graph neural networks (GNNs) have attracted much attention in the field of machine learning due to their remarkable success in learning from graph‐structured data. However, implementing GNNs in practice faces a critical bottleneck in the high complexity of communication and computation, which arises from the frequent exchange of graph data during model training, especially in scenarios with limited communication resources. To address this issue, we propose a novel framework of federated graph neural networks, in which multiple mobile users collaboratively train a global GNN model in a federated way. Applying federated learning to the training of graph neural networks helps reduce the communication overhead of the system and protects the data privacy of local users; the federated training also reduces the computational complexity of the system significantly. We further introduce a greedy‐based user selection scheme for the federated graph neural networks, in which the wireless bandwidth is dynamically allocated among users so that more users can take part in the federated training. We carry out a convergence analysis of the federated training to gain further insight into the impact of critical parameters on the system design. Finally, we run simulations on the Coriolis Ocean for ReAnalysis (CORA) dataset and show the advantages of the proposed method.
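The abstract describes the selection rule only at a high level: in each training round, the available wireless bandwidth is split among users greedily so that as many users as possible can join the federated update. The snippet below is a minimal illustrative sketch of such a greedy admission step under a total bandwidth budget; the User fields, the per-user bandwidth demand, and the tie-breaking score are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch of a greedy user-admission step under a total bandwidth budget.
# The selection criterion and the per-user bandwidth model are illustrative
# assumptions; the paper's actual objective and constraints are not given here.

from dataclasses import dataclass
from typing import List

@dataclass
class User:
    uid: int
    min_bandwidth: float   # bandwidth a user needs to upload its update in time (assumed)
    update_value: float    # assumed "usefulness" score of the user's local update

def greedy_select(users: List[User], total_bandwidth: float) -> List[int]:
    """Greedily admit users, cheapest bandwidth demand first, until the budget is spent.

    The goal, as described in the abstract, is to let as many users as possible
    join each round of federated training; ties are broken by the assumed
    update_value score.
    """
    selected: List[int] = []
    remaining = total_bandwidth
    # Users that are cheap to serve (and, secondarily, more valuable) come first.
    for user in sorted(users, key=lambda u: (u.min_bandwidth, -u.update_value)):
        if user.min_bandwidth <= remaining:
            selected.append(user.uid)
            remaining -= user.min_bandwidth
    return selected

if __name__ == "__main__":
    pool = [User(0, 2.0, 0.9), User(1, 1.0, 0.5), User(2, 3.5, 0.8), User(3, 1.5, 0.4)]
    print(greedy_select(pool, total_bandwidth=5.0))  # admits users 1, 3, 0; user 2 exceeds the budget
```

In a full federated system this step would run at the start of every round: the admitted users train the GNN locally on their own graph data and upload model updates for aggregation into the global model, matching the collaborative training loop described in the abstract.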
Bibliographic Details
Published in: Computational intelligence, 2024-02, Vol. 40 (1), p. n/a
Main Authors: Huangfu, Hancong; Zhang, Zizhen
Format: Article
Language: English
Publisher: Hoboken: Blackwell Publishing Ltd
ISSN: 0824-7935
EISSN: 1467-8640
DOI: 10.1111/coin.12637
Rights: 2024 Wiley Periodicals LLC
Source: Wiley-Blackwell Read & Publish Collection
Subjects: Communication; Complexity; convergence analysis; federated learning; Graph neural networks; limited communication resources; Machine learning; Neural networks; Structured data; Systems design; Training; Wireless networks
Online Access: Get full text