Clustering techniques

Clustering techniques fall into the group of undirected data mining tools. The goal of undirected data mining is to discover structure in the data as a whole. There is no target variable to be predicted, so no distinction is made between independent and dependent variables.

Clustering techniques are used for combining observed examples into clusters (groups) which satisfy two main criteria:

  * each cluster should be homogeneous: examples that belong to the same cluster should be similar to each other;
  * each cluster should be different from the other clusters: examples that belong to one cluster should differ from the examples of the other clusters.

Depending on the clustering technique, clusters can be expressed in different ways:

  * identified clusters may be exclusive, so that any example belongs to only one cluster;
  * they may be overlapping: an example may belong to several clusters;
  * they may be probabilistic, whereby an example belongs to each cluster with a certain probability;
  * clusters may have a hierarchical structure, with a crude division of examples at the highest level of the hierarchy that is then refined into sub-clusters at lower levels.

We will explain here the basics of the simplest of clustering methods: the k-means algorithm. There are many other methods, such as self-organizing maps (Kohonen networks) or probabilistic clustering methods (the AutoClass algorithm), which are more sophisticated and powerful, but the k-means algorithm seemed the best choice for illustrating the main principles.

K-means algorithm

This algorithm takes as input a predefined number of clusters, the k from its name. "Means" stands for an average: the average location of all the members of a particular cluster. When dealing with clustering techniques, one has to adopt the notion of a high-dimensional space, a space in which the orthogonal dimensions are all the attributes from the table of data we are analyzing. The value of each attribute of an example represents the distance of the example from the origin along that attribute axis. Of course, in order to use this geometry efficiently, the values in the data set must all be numeric (categorical data must be transformed into numeric data!) and should be normalized in order to allow fair computation of the overall distances in a multi-attribute space.
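As a minimal sketch of this preparation step, the following Python fragment (with invented attribute values) rescales every attribute to the [0, 1] range, a simple form of normalization:

    # Min-max normalization: rescale every attribute to [0, 1] so that no
    # attribute dominates the distance computation merely by its units.
    def normalize(rows):
        n = len(rows[0])
        lows = [min(r[i] for r in rows) for i in range(n)]
        highs = [max(r[i] for r in rows) for i in range(n)]
        return [
            [(r[i] - lows[i]) / (highs[i] - lows[i]) if highs[i] > lows[i] else 0.0
             for i in range(n)]
            for r in rows
        ]

    # Age in years and income in dollars live on very different scales;
    # without normalization, income alone would decide every distance.
    data = [[25, 40000.0], [47, 85000.0], [33, 52000.0]]
    print(normalize(data))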

The k-means algorithm is a simple, iterative procedure in which the crucial concept is that of a centroid. A centroid is an artificial point in the space of records which represents the average location of a particular cluster: its coordinates are the averages of the attribute values of all the examples that belong to the cluster. The steps of the k-means algorithm are given in Figure 1.

____________________________________________________________________

  1. Select randomly k points (they can also be examples) to be the
    seeds for the centroids of the k clusters.
  2. Assign each example to the centroid closest to it, forming in
    this way k exclusive clusters of examples.
  3. Calculate new centroids of the clusters: for each cluster,
    average the attribute values of all the examples belonging to it.
  4. Check if the cluster centroids have changed their "coordinates".
    If yes, start again from step 2. If not, cluster detection is
    finished and all examples have their cluster memberships defined.

____________________________________________________________________

Figure 1. K-means algorithm
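The steps in Figure 1 translate almost directly into code. The following is a minimal Python sketch of the algorithm (the function names and the iteration cap max_iter are our own choices), assuming the examples are lists of numeric, normalized attribute values:

    import math
    import random

    def euclidean(a, b):
        # Distance along all attribute axes (see "Distance measure" below).
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def k_means(examples, k, max_iter=100):
        # Step 1: select k random examples as seeds for the centroids.
        centroids = random.sample(examples, k)
        for _ in range(max_iter):
            # Step 2: assign each example to its closest centroid,
            # forming k exclusive clusters.
            clusters = [[] for _ in range(k)]
            for ex in examples:
                nearest = min(range(k), key=lambda i: euclidean(ex, centroids[i]))
                clusters[nearest].append(ex)
            # Step 3: recompute each centroid as the attribute-wise
            # average of its members (an empty cluster keeps its
            # previous centroid).
            new_centroids = [
                [sum(vals) / len(members) for vals in zip(*members)]
                if members else centroids[i]
                for i, members in enumerate(clusters)
            ]
            # Step 4: if no centroid moved, cluster detection is finished.
            if new_centroids == centroids:
                break
            centroids = new_centroids
        return centroids, clusters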

Usually this iterative procedure of redefining centroids and reassigning the examples to clusters needs only a few iterations to converge.


Important issues in automatic cluster detection

Most of the issues related to automatic cluster detection concern the kinds of questions we want the data mining project to answer, or the data preparation needed for the techniques to be applied successfully.

Distance measure

Most clustering techniques use the Euclidean distance formula (the square root of the sum of the squared differences along each attribute axis) as the distance measure.
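For example, the distance between the points (1, 2) and (4, 6) is sqrt((4-1)^2 + (6-2)^2) = sqrt(9 + 16) = 5.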

Non-numeric variables must be transformed and scaled before clustering can take place. Depending on these transformations, the categorical variables may dominate the clustering results, or they may even be completely ignored.
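One common transformation, sketched below with an invented "color" attribute, replaces a categorical variable with one 0/1 indicator attribute per category; a weight can then control how strongly the attribute contributes to the Euclidean distance:

    # One-hot encoding: replace a categorical value with one 0/1
    # indicator per category. The weight tunes how much the attribute
    # can contribute to the overall distance relative to the numeric
    # attributes.
    def one_hot(value, categories, weight=1.0):
        return [weight if value == c else 0.0 for c in categories]

    colors = ["red", "green", "blue"]
    print(one_hot("green", colors))        # [0.0, 1.0, 0.0]
    print(one_hot("green", colors, 0.5))   # [0.0, 0.5, 0.0]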

Choice of the right number of clusters

If the number of clusters k in the k-means method is not chosen so as to match the natural structure of the data, the results will not be good. The proper way to alleviate this is to experiment with different values of k. In principle, the best value of k will exhibit the smallest intra-cluster distances and the largest inter-cluster distances. More sophisticated techniques measure these qualities automatically and optimize the number of clusters in a separate loop (AutoClass).
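A minimal sketch of such an experiment, reusing k_means and euclidean from the sketch after Figure 1 and invented, already-normalized data: run the algorithm for several values of k and compare the total intra-cluster distance:

    # Invented, already-normalized data with two visible groupings.
    examples = [[0.10, 0.20], [0.15, 0.25], [0.12, 0.22],
                [0.80, 0.90], [0.85, 0.95], [0.82, 0.88]]

    def total_intra_cluster_distance(centroids, clusters):
        # Sum of distances from every example to its own centroid;
        # smaller totals mean tighter, more homogeneous clusters.
        return sum(euclidean(ex, centroids[i])
                   for i, members in enumerate(clusters)
                   for ex in members)

    for k in range(2, 5):
        centroids, clusters = k_means(examples, k)
        print(k, total_intra_cluster_distance(centroids, clusters))

Since the total intra-cluster distance necessarily shrinks as k grows, one looks for the value of k after which the improvement levels off, rather than for the absolute minimum.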

Cluster interpretation

Once the clusters are discovered, they have to be interpreted in order to have some value for the data mining project. There are different ways to utilize clustering results:

  * cluster membership can be used as a label for a separate classification problem;
  * clusters can be visualized using 2D or 3D scatter graphs, or some other visualization technique;
  * differences in attribute values among different clusters can be examined, one attribute at a time (sketched below).
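As a minimal illustration of the last approach (the function name and attribute names are invented), the following fragment prints each centroid's coordinates, so the attributes on which a cluster stands out can be spotted at a glance:

    def describe_clusters(centroids, attr_names):
        # Print each centroid's coordinates; an attribute whose value
        # differs strongly between centroids is what makes the clusters
        # distinct.
        for i, c in enumerate(centroids):
            desc = ", ".join(f"{name}={value:.2f}"
                             for name, value in zip(attr_names, c))
            print(f"cluster {i}: {desc}")

    describe_clusters([[0.12, 0.22], [0.82, 0.91]], ["age", "income"])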

Application issues

Clustering techniques are used when we expect natural groupings in the examples of the data. Clusters should then represent groups of items (products, events, customers) that have a lot in common. Creating clusters prior to the application of some other data mining technique (decision trees, neural networks) might reduce the complexity of the problem by dividing the space of examples. These partitions can be mined separately, and such a two-step procedure might exhibit improved results (descriptive or predictive) compared to data mining without clustering.
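A minimal sketch of this two-step idea, again reusing k_means and euclidean from the sketch after Figure 1: cluster memberships are computed first and appended to each example as an extra attribute for a subsequent predictive technique:

    def add_cluster_attribute(examples, k):
        # First detect clusters, then tag every example with the index
        # of its nearest centroid; the tag becomes an input attribute
        # for the next modeling step (e.g. a decision tree).
        centroids, _ = k_means(examples, k)
        return [
            ex + [min(range(k), key=lambda i: euclidean(ex, centroids[i]))]
            for ex in examples
        ]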


Links to online tutorials on clustering


Data Clustering and Its Applications
by Raza Ali, Usman Ghani, Aasim Saeed
http://members.tripod.com/asim_saeed/paper.htm



© 2001 LIS - Rudjer Boskovic Institute