Scalable k-means for large-scale clustering
Published in: Intelligent Data Analysis, 2019-01, Vol. 23 (4), p. 825-838
Main Authors: , , , , ,
Format: Article
Language: English
Summary: k-means clustering is arguably the most popular clustering technique and has been applied to a wide range of applications. Lloyd's algorithm is the most widely used algorithm for the k-means problem, owing to its simplicity, geometric intuition, and effectiveness. However, a naive implementation of Lloyd's algorithm must compute the Euclidean distances between all data points and all cluster centers in every iteration. This computation is the main bottleneck and prevents the algorithm from scaling to large datasets. To overcome this problem, this paper proposes two scalable k-means algorithms, Scalable Lloyd's k-means and Scalable Mini-Batch k-means, which are distributed extensions of Lloyd's algorithm and mini-batch k-means, respectively. Both algorithms use data parallelism to scale beyond the computational and memory limits of a single machine, and both are built on the parameter server abstraction, which facilitates data-parallel computation. The first algorithm finds higher-quality solutions, while the second converges to a modest solution faster. Both scale well and perform all computation in memory. In addition, we propose a new aggregation method for Scalable Mini-Batch k-means. Extensive experiments on four large-scale datasets show that the proposed algorithms converge well and achieve almost ideal speedup.
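The summary contrasts Lloyd's algorithm, which touches every point each iteration, with mini-batch k-means, which updates centers from small random batches. The paper's distributed, parameter-server variants and its new aggregation method are not reproduced in this record; as background, the following is a minimal single-machine sketch of the standard mini-batch k-means update with per-center decaying learning rates (in the style of Sculley's formulation). All function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np

def mini_batch_kmeans(X, k, batch_size=100, n_iters=100, init=None, seed=0):
    """Single-machine sketch of mini-batch k-means.

    Each iteration samples a small batch, assigns its points to the
    nearest centers, and nudges each center toward its assigned points
    with a learning rate that decays as the center absorbs more samples.
    """
    rng = np.random.default_rng(seed)
    if init is None:
        init = X[rng.choice(len(X), size=k, replace=False)]
    centers = np.array(init, dtype=float)
    counts = np.zeros(k)  # samples absorbed per center, drives the decay
    for _ in range(n_iters):
        batch = X[rng.choice(len(X), size=batch_size, replace=False)]
        # squared Euclidean distance from each batch point to each center
        d2 = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for x, c in zip(batch, labels):
            counts[c] += 1
            eta = 1.0 / counts[c]  # per-center learning rate
            centers[c] = (1.0 - eta) * centers[c] + eta * x
    return centers
```

Because only `batch_size` distances per center are computed each iteration (rather than all n points, as in Lloyd's algorithm), the per-iteration cost drops from O(nkd) to O(bkd), which is the trade-off the record's summary refers to: faster convergence to a somewhat worse solution.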
ISSN: 1088-467X, 1571-4128
DOI: 10.3233/IDA-173795