DMP-ELMs: Data and model parallel extreme learning machines for large-scale learning tasks

Bibliographic Details
Published in: Neurocomputing (Amsterdam), December 2018, Vol. 320, pp. 85-97
Main Authors: Ming, Yuewei; Zhu, En; Wang, Mao; Ye, Yongkai; Liu, Xinwang; Yin, Jianping
Format: Article
Language:English
Description
Summary: As machine learning applications embrace larger data sizes and model complexity, practitioners turn to distributed clusters to satisfy the increasing computational and memory demands. Recently, several parallel variants of the extreme learning machine (ELM) have been proposed, some of which are based on clusters. However, these variants still face computational and memory limitations when both the data and the model are very large. Our goal is to build scalable ELMs that handle large numbers of samples and hidden neurons, run in parallel on clusters without computational or memory bottlenecks, and produce the same output as the sequential ELM. In this paper, we propose two parallel variants of ELM, referred to as the local data and model parallel ELM (LDMP-ELM) and the global data and model parallel ELM (GDMP-ELM). Both variants are implemented on clusters using the Message Passing Interface (MPI). Each makes a different tradeoff between efficiency and scalability, and their advantages are complementary. Collectively, the two variants are referred to as data and model parallel ELMs (DMP-ELMs). Their advantages over existing variants are as follows: (1) they combine data and model parallel techniques to increase the parallelism of ELM; (2) they scale to larger data and models because they address the memory and computational bottlenecks of existing variants. Extensive experiments on four large-scale datasets show that the proposed algorithms scale well and achieve nearly ideal speedup. To the best of our knowledge, this is the first time an ELM model with 50,000 hidden neurons has been successfully trained on the mnist8m dataset (8.1 million samples, 784 features).
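
The abstract only summarizes what is being parallelized, so the following is a minimal sketch of the data-parallel half of the idea: each MPI rank holds a row slice of the training data, computes its local Gram matrices H^T H and H^T T, and an Allreduce sums them so no single node must hold the full hidden-layer matrix. It assumes NumPy and mpi4py; the function name train_elm_data_parallel, the tanh activation, the shared random seed, and the regularization constant C are illustrative assumptions, not details taken from the paper (which additionally splits the hidden neurons across nodes in its model-parallel variants).

```python
# Hedged sketch of data-parallel ELM training with MPI (not the paper's implementation).
import numpy as np
from mpi4py import MPI

def train_elm_data_parallel(X_local, T_local, n_hidden=1000, C=1.0, seed=0):
    """Each rank holds a row slice (X_local, T_local) of the training set.
    All ranks draw the same random hidden layer, compute local partial sums
    of H^T H and H^T T, and reduce them so every rank can solve for beta."""
    comm = MPI.COMM_WORLD
    d = X_local.shape[1]

    # Identical random hidden-layer parameters on every rank (shared seed).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)

    # Local hidden-layer output and partial normal-equation terms.
    H = np.tanh(X_local @ W + b)      # shape (N_local, L), tanh is an illustrative choice
    HtH_local = H.T @ H               # shape (L, L)
    HtT_local = H.T @ T_local         # shape (L, m)

    # Sum the partial results across all ranks.
    HtH = np.empty_like(HtH_local)
    HtT = np.empty_like(HtT_local)
    comm.Allreduce(HtH_local, HtH, op=MPI.SUM)
    comm.Allreduce(HtT_local, HtT, op=MPI.SUM)

    # Regularized least-squares solution for the output weights.
    beta = np.linalg.solve(HtH + np.eye(n_hidden) / C, HtT)
    return W, b, beta
```

Because only the L x L and L x m reduced matrices are communicated, the per-node memory cost depends on the local sample count and the number of hidden neurons rather than on the full dataset; scaling L further is what the model-parallel dimension described in the abstract addresses.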
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2018.08.062