Clustering analysis is one of the most commonly used data processing techniques. For over half a century, K-means has remained the most popular clustering algorithm because of its simplicity. Traditional K-means assigns n data objects to k clusters, starting from random initial centers. However, most K-means variants compute the distance from each data point to every cluster centroid in every iteration. We propose a fast heuristic that overcomes this bottleneck with only a marginal increase in Mean Squared Error (MSE). We observe that, across all iterations of K-means, a data point changes its membership only among a small subset of clusters. Our heuristic predicts these clusters for each data point by examining nearby clusters after the first iteration of K-means. We augment well-known variants of K-means, such as Enhanced K-means and K-means with Triangle Inequality, with our heuristic to demonstrate its effectiveness. On various datasets, our heuristic achieves speed-ups of up to 3x compared to efficient variants of K-means.
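The idea described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: after one full K-means iteration, each point records its m nearest centroids as a candidate set (the parameter name `m` and the helper `kmeans_candidate` are assumptions for this sketch), and later iterations compute distances only to those candidates.

```python
# Hypothetical sketch of the candidate-cluster heuristic: after the first
# full K-means iteration, each point only considers its m nearest centroids.
import numpy as np

def kmeans_candidate(X, k, m=3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]

    # First iteration: full distance computation, every point vs. every center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # Heuristic: remember each point's m nearest centroids as its candidates.
    cand = np.argsort(d, axis=1)[:, :m]

    for _ in range(iters - 1):
        # Update step: recompute each cluster center as the mean of its members.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
        # Assignment step: distances only to the candidate centroids (n*m
        # distances instead of n*k), which is where the speed-up comes from.
        dc = np.linalg.norm(X[:, None, :] - centers[cand], axis=2)
        labels = cand[np.arange(len(X)), dc.argmin(axis=1)]
    return labels, centers
```

Because membership can only move within the candidate set, the result may differ slightly from exact K-means, which is the source of the marginal MSE increase the abstract mentions.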
C. Raghavendra is pursuing a Ph.D. in Computer Science & Engineering at Bharath University, Chennai. He is presently working as an Assistant Professor in the CSE Department, Institute of Aeronautical Engineering, Hyderabad. His research interests include image processing and security, and Big Data.
Rajendra Prasad Kypa
Reuben Bernard Francis
LAP LAMBERT Academic Publishing
Big Data, K-means
COMPUTERS / General