Efficient Training Management for Mobile Crowd-Machine Learning: A Deep Reinforcement Learning Approach

In this letter, we consider the concept of mobile crowd-machine learning (MCML) for a federated learning model. MCML enables mobile devices in a mobile network to collaboratively train neural network models required by a server while keeping the data on the mobile devices, and thus addresses the data privacy issues of traditional machine learning. However, the mobile devices are constrained in energy, CPU, and wireless bandwidth, so to minimize the energy consumption, training time, and communication cost, the server needs to determine the proper amounts of data and energy that the mobile devices use for training. Under the dynamics and uncertainty of the mobile environment, it is challenging for the server to make these resource-management decisions optimally. In this letter, we propose to adopt a deep Q-learning algorithm that allows the server to learn and find optimal decisions without any a priori knowledge of the network dynamics. Simulation results show that the proposed algorithm outperforms static algorithms in terms of energy consumption and training latency.
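
The abstract describes a server that uses deep Q-learning to decide how much data and energy each mobile device should spend on training, without prior knowledge of the network dynamics. The Python sketch below is only a minimal illustration of such a deep Q-learning loop, not the letter's exact formulation: the state (per-device energy and data levels), the discrete action set of (data, energy) requests, the reward, and all names and hyperparameters (QNetwork, select_action, train_step, N_DEVICES, and so on) are assumptions made for illustration.

```python
# Minimal deep Q-learning sketch for a server choosing per-round training
# resources (data and energy) for mobile devices. State, action, reward,
# and all hyperparameters here are illustrative assumptions, not the
# letter's exact MDP formulation.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_DEVICES = 3               # assumed number of mobile devices
STATE_DIM = 2 * N_DEVICES   # assumed: each device reports an energy level and a data level
N_ACTIONS = 9               # assumed: discrete grid of (data units, energy units) to request


class QNetwork(nn.Module):
    """Maps the observed network state to Q-values over resource decisions."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


q_net = QNetwork()
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # holds (state, action, reward, next_state) transitions
gamma, epsilon = 0.99, 0.1


def select_action(state):
    """Epsilon-greedy choice of which (data, energy) request to issue next."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax())


def train_step(batch_size=32):
    """One temporal-difference update on a minibatch sampled from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states = torch.stack([s for s, _, _, _ in batch])
    actions = torch.tensor([a for _, a, _, _ in batch])
    rewards = torch.tensor([r for _, _, r, _ in batch], dtype=torch.float32)
    next_states = torch.stack([s2 for _, _, _, s2 in batch])

    # Q-value of the action actually taken in each stored transition.
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * q_net(next_states).max(1).values
    loss = nn.functional.mse_loss(q, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the letter's setting, the reward stored in the replay buffer would presumably encode the negatives of energy consumption, training time, and communication cost, so that maximizing long-term reward matches the stated minimization objective; collecting the transitions themselves is left to the surrounding interaction with the mobile devices.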

Bibliographic Details
Published in: IEEE Wireless Communications Letters, 2019-10, Vol. 8 (5), pp. 1345-1348
Main Authors: Anh, Tran The; Luong, Nguyen Cong; Niyato, Dusit; Kim, Dong In; Wang, Li-Chun
Format: Article
Language: English
DOI: 10.1109/LWC.2019.2917133
ISSN: 2162-2337
EISSN: 2162-2345
Source: IEEE Xplore (Online service)
Subjects: Algorithms
Artificial intelligence
Bandwidths
Computer simulation
Data models
Decisions
deep reinforcement learning
Electronic devices
Energy conservation
Energy consumption
federated learning
Heuristic algorithms
Machine learning
Mobile communication systems
Mobile crowd
Mobile handsets
Neural networks
Resource management
Servers
Training
Wireless networks