FP-DCNN: a parallel optimization algorithm for deep convolutional neural network
Published in: The Journal of Supercomputing, 2022-02, Vol. 78 (3), pp. 3791–3813
Format: Article
Language: English
Summary: Deep convolutional neural networks (DCNNs) have been used successfully in many computer vision tasks. However, as network complexity and data scale keep growing, training a DCNN model suffers from three problems: excessive network parameters, insufficient parameter-optimization capability, and inefficient parallelism. To overcome these obstacles, this paper develops a parallel optimization algorithm for deep convolutional neural networks (FP-DCNN) in the MapReduce framework. First, a pruning method based on Taylor's loss (FMPTL) is designed to trim redundant parameters, which both compresses the structure of the DCNN and reduces the computational cost of training. Next, a glowworm swarm optimization algorithm based on an information-sharing strategy (IFAS) is presented, which improves parameter optimization by adjusting the initialization of weights. Finally, a dynamic load-balancing strategy based on parallel computing entropy (DLBPCE) is proposed to distribute data evenly across the cluster and thus improve its parallel performance. Our experiments show that, compared with other parallelized algorithms, FP-DCNN not only reduces the computational cost of network training but also achieves a higher processing speed.
ISSN: 0920-8542, 1573-0484
DOI: 10.1007/s11227-021-04012-y
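
Two of the summarized mechanisms can be illustrated with short sketches. FMPTL is described as pruning based on Taylor's loss; the snippet below shows the generic first-order Taylor importance criterion that such filter-pruning methods build on, scoring each convolutional filter by the magnitude of weight times gradient. The PyTorch model, loss, and data in the demo are hypothetical placeholders, and the paper's exact FMPTL scoring and pruning schedule may differ.

```python
# A minimal sketch of first-order Taylor filter scoring, the generic criterion
# behind pruning methods such as FMPTL (assumption: the paper's exact rule may differ).
import torch
import torch.nn as nn

def taylor_filter_scores(model: nn.Module, loss: torch.Tensor) -> dict:
    """Score each conv filter by sum |weight * d(loss)/d(weight)| over the filter.

    A first-order Taylor expansion estimates the loss change caused by removing
    a filter; filters with the smallest scores are candidates for pruning.
    """
    loss.backward()  # populate .grad on every parameter
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            w, g = module.weight, module.weight.grad
            # sum |w * g| over each output filter's (in_ch, kH, kW) slice
            scores[name] = (w * g).abs().sum(dim=(1, 2, 3))
    return scores

if __name__ == "__main__":
    # Hypothetical toy network and batch, just to exercise the scoring.
    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(),
                          nn.Linear(8 * 30 * 30, 10))
    x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
    loss = nn.CrossEntropyLoss()(model(x), y)
    print(taylor_filter_scores(model, loss))
```

DLBPCE is described as balancing load via "parallel computing entropy". A common reading of that term is the Shannon entropy of the per-node data fractions, which is maximal when data is split evenly; the sketch below normalizes it to [0, 1] and flags skew against a threshold. The node loads and the 0.95 threshold are illustrative assumptions for the example, not values from the paper.

```python
# A minimal sketch of entropy-based load measurement, in the spirit of DLBPCE.
import math

def parallel_computing_entropy(loads: list[float]) -> float:
    """Shannon entropy of the load distribution, normalized to [0, 1].

    1.0 means a perfectly even split across nodes; lower values mean skew.
    """
    total = sum(loads)
    probs = [l / total for l in loads if l > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(loads))  # divide by the maximum entropy, log(n)

# Example: a skewed 4-node cluster falls below the (assumed) threshold.
loads = [400.0, 100.0, 100.0, 100.0]
if parallel_computing_entropy(loads) < 0.95:  # threshold is an assumption
    print("imbalanced:", parallel_computing_entropy(loads))
```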