Accelerate Federated Learning of Big Data Analysis Through Distillation and Coupled Gradient Methods
Format: Conference Proceeding
Language: English
Summary: To address the problem that training deep neural networks under federated learning (FL) can be prohibitively slow or even infeasible, this paper presents an efficient FL training framework. The framework consists of two parts, distillation and a coupled gradient algorithm, which together reduce computational cost, speed up the FL training process, and save computational resources. Simulation experiments on an open dataset (MNIST) show that the framework preserves learning performance while keeping the training cost well under control.
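The abstract does not spell out the framework's objective, but its distillation component presumably follows the standard knowledge-distillation recipe, in which a compact client model is trained against the softened outputs of a larger teacher. A minimal NumPy sketch of such a loss is shown below; the function names, temperature, and mixing weight are illustrative assumptions, not details from the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution.
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic Hinton-style distillation loss (a sketch, not the paper's
    exact objective): blend soft-target KL divergence with hard-label
    cross-entropy."""
    p_t = softmax(teacher_logits, T)   # softened teacher targets
    p_s = softmax(student_logits, T)   # softened student predictions
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard = -np.log(
        softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12
    )
    # The T**2 factor rescales the soft-target gradient to keep the two
    # terms comparable across temperatures.
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * hard))
```

In an FL round, each client would minimize a loss of this shape locally before the server aggregates the updates; how the paper's coupled gradient algorithm interacts with this step is not described in the record.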
ISSN: 2161-2927
DOI: 10.23919/CCC55666.2022.9902345