Class KMeans

java.lang.Object
smile.clustering.KMeans

public class KMeans extends Object
K-Means clustering. The algorithm partitions n observations into k clusters in which each observation belongs to the cluster with the nearest mean. Although finding an exact solution to the k-means problem for arbitrary input is NP-hard, the standard approach to finding an approximate solution (often called Lloyd's algorithm or the k-means algorithm) is used widely and frequently finds reasonable solutions quickly.
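Concretely, Lloyd's algorithm alternates an assignment step (label each observation with its nearest centroid) and an update step (move each centroid to the mean of its assigned observations) until the assignments stop changing. The following is a minimal conceptual sketch of one iteration, not the implementation used by this class (which accelerates the step with a BBD-tree, as described below):

  // Conceptual sketch of a single Lloyd iteration over squared Euclidean
  // distance. Not this class's implementation, which uses a BBD-tree.
  static void lloydStep(double[][] data, double[][] centroids, int[] labels) {
      int k = centroids.length;
      int d = data[0].length;
      // Assignment step: label each observation with its nearest centroid.
      for (int i = 0; i < data.length; i++) {
          double best = Double.POSITIVE_INFINITY;
          for (int j = 0; j < k; j++) {
              double dist = 0.0;
              for (int m = 0; m < d; m++) {
                  double diff = data[i][m] - centroids[j][m];
                  dist += diff * diff;
              }
              if (dist < best) {
                  best = dist;
                  labels[i] = j;
              }
          }
      }
      // Update step: move each centroid to the mean of its cluster.
      double[][] sum = new double[k][d];
      int[] count = new int[k];
      for (int i = 0; i < data.length; i++) {
          count[labels[i]]++;
          for (int m = 0; m < d; m++) sum[labels[i]][m] += data[i][m];
      }
      for (int j = 0; j < k; j++) {
          if (count[j] > 0) {
              for (int m = 0; m < d; m++) centroids[j][m] = sum[j][m] / count[j];
          }
      }
  }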

K-means has a number of interesting theoretical properties. First, it partitions the data space into a structure known as a Voronoi diagram. Second, it is conceptually close to nearest neighbor classification, and as such is popular in machine learning. Third, it can be seen as a variation of model-based clustering, and Lloyd's algorithm as a variation of the EM algorithm.

However, the k-means algorithm has at least two major theoretical shortcomings:

  • First, it has been shown that the worst-case running time of the algorithm is super-polynomial in the input size.
  • Second, the approximation found can be arbitrarily bad with respect to the objective function compared to the optimal clustering. Therefore, it is common to run the algorithm multiple times with different random initializations.
In this implementation, we use k-means++ which addresses the second of these obstacles by specifying a procedure to initialize the cluster centers before proceeding with the standard k-means optimization iterations. With the k-means++ initialization, the algorithm is guaranteed to find a solution that is O(log k)-competitive to the optimal k-means solution.
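The seeding procedure itself is simple: pick the first center uniformly at random, then repeatedly pick the next center among the observations with probability proportional to its squared distance from the nearest center chosen so far. A self-contained sketch (again, not this class's internal code):

  import java.util.Arrays;
  import java.util.Random;

  // Sketch of k-means++ seeding: each new center is drawn with probability
  // proportional to the squared distance to the nearest chosen center.
  static double[][] seed(double[][] data, int k, Random rng) {
      int n = data.length;
      double[][] centers = new double[k][];
      centers[0] = data[rng.nextInt(n)].clone();
      double[] d2 = new double[n];
      Arrays.fill(d2, Double.POSITIVE_INFINITY);
      for (int c = 1; c < k; c++) {
          double total = 0.0;
          for (int i = 0; i < n; i++) {
              // Distance to the most recently added center may shrink d2[i].
              double dist = 0.0;
              for (int m = 0; m < data[i].length; m++) {
                  double diff = data[i][m] - centers[c - 1][m];
                  dist += diff * diff;
              }
              d2[i] = Math.min(d2[i], dist);
              total += d2[i];
          }
          // Draw the next center with probability proportional to d2.
          double r = rng.nextDouble() * total;
          int next = 0;
          double cum = d2[0];
          while (cum < r && next < n - 1) {
              next++;
              cum += d2[next];
          }
          centers[c] = data[next].clone();
      }
      return centers;
  }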

We also use k-d trees to speed up each k-means step, as described in the filter algorithm by Kanungo et al.

K-means is a hard clustering method, i.e. each observation is assigned to exactly one cluster. In contrast, soft clustering methods, e.g. the Expectation-Maximization algorithm for Gaussian mixtures, assign observations to different clusters with different probabilities.

References

  1. Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman, and Angela Y. Wu. An Efficient k-Means Clustering Algorithm: Analysis and Implementation. IEEE Trans. PAMI, 2002.
  2. D. Arthur and S. Vassilvitskii. k-means++: The Advantages of Careful Seeding. ACM-SIAM Symposium on Discrete Algorithms, 1027-1035, 2007.
  3. Anna D. Peterson, Arka P. Ghosh, and Ranjan Maitra. A Systematic Evaluation of Different Methods for Initializing the K-means Clustering Algorithm. 2010.
Method Details

    • fit

      public static CentroidClustering<double[],double[]> fit(double[][] data, int k, int maxIter)
      Fits k-means clustering.
      Parameters:
      data - the input data of which each row is an observation.
      k - the number of clusters.
      maxIter - the maximum number of iterations.
      Returns:
      the model.
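      For example, a minimal sketch on a toy dataset (the data below is illustrative):

        import smile.clustering.CentroidClustering;
        import smile.clustering.KMeans;

        // Two well-separated groups; k = 2, at most 100 iterations.
        double[][] data = {
            {1.0, 1.0}, {1.2, 0.8}, {0.9, 1.1},
            {8.0, 8.0}, {8.2, 7.9}, {7.8, 8.1}
        };
        CentroidClustering<double[], double[]> model = KMeans.fit(data, 2, 100);
        System.out.println(model);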
    • fit

      public static CentroidClustering<double[],double[]> fit(double[][] data, Clustering.Options options)
      Fits k-means clustering.
      Parameters:
      data - the input data of which each row is an observation.
      options - the hyperparameters.
      Returns:
      the model.
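      A sketch of the options-based overload. The (k, maxIter) constructor for Clustering.Options shown below is an assumption for illustration; consult the Clustering.Options documentation for its actual signature and any further hyperparameters (e.g. a convergence tolerance):

        import smile.clustering.Clustering;
        import smile.clustering.KMeans;

        // Assumed (k, maxIter) constructor; verify against Clustering.Options.
        var options = new Clustering.Options(2, 100);
        var model = KMeans.fit(data, options);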
    • fit

      public static CentroidClustering<double[],double[]> fit(BBDTree bbd, double[][] data, Clustering.Options options)
      Partitions data into k clusters.
      Parameters:
      bbd - the BBD-tree of data for fast clustering.
      data - the input data of which each row is an observation.
      options - the hyperparameters.
      Returns:
      the model.
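      This overload is useful when the same dataset is clustered repeatedly, e.g. when scanning several values of k: the BBD-tree is built once and reused across runs. The sketch below assumes BBDTree is constructed directly from the data matrix and reuses the assumed Options constructor from the previous example:

        import smile.clustering.BBDTree;
        import smile.clustering.Clustering;
        import smile.clustering.KMeans;

        // Build the tree once; reuse it across runs with different k.
        // BBDTree(double[][]) and Clustering.Options(k, maxIter) are
        // assumptions here; verify against their documentation.
        BBDTree bbd = new BBDTree(data);
        for (int k = 2; k <= 10; k++) {
            var model = KMeans.fit(bbd, data, new Clustering.Options(k, 100));
            System.out.println(model);
        }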
    • lloyd

      public static CentroidClustering<double[],double[]> lloyd(double[][] data, int k, int maxIter)
      Fits k-means clustering on the data containing missing values (NaN).
      Parameters:
      data - the input data of which each row is an observation.
      k - the number of clusters.
      maxIter - the maximum number of iterations.
      Returns:
      the model.
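      Missing values are encoded as Double.NaN entries in the input matrix, for example:

        // Observations may contain Double.NaN for missing values.
        double[][] data = {
            {1.0, Double.NaN}, {1.2, 0.8}, {0.9, 1.1},
            {8.0, 8.0}, {Double.NaN, 7.9}, {7.8, 8.1}
        };
        var model = KMeans.lloyd(data, 2, 100);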
    • lloyd

      public static CentroidClustering<double[],double[]> lloyd(double[][] data, Clustering.Options options)
      Fits k-means clustering on the data containing missing values (NaN).
      Parameters:
      data - the input data of which each row is an observation.
      options - the hyperparameters.
      Returns:
      the model.