16 votes

Has anyone tried applying a smoother to the evaluation metric before using the L-method to determine the number of k-means clusters in a dataset? If so, did it improve the results? Or did it allow a lower number of k-means trials and hence a much greater increase in speed? Which smoothing algorithm/method did you use?

The "L-Method" is detailed in: Determining the Number of Clusters/Segments in Hierarchical Clustering/Segmentation Algorithms, Salvador & Chan

This calculates the evaluation metric for a range of trial cluster counts. Then, to find the knee (which occurs at the optimum number of clusters), two lines are fitted using linear regression. A simple iterative process refines the knee fit; this reuses the existing evaluation metric calculations and does not require any re-runs of k-means.
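For concreteness, here is a minimal sketch of the two-line fit in Python (the function name, the use of numpy.polyfit, and the exact RMSE weighting are my assumptions; the paper defines the knee as the split that minimizes the total weighted fit error):

```python
import numpy as np

def l_method_knee(ks, metric):
    """Find the knee of metric-vs-k by fitting two straight lines
    (least squares) on either side of each candidate split and
    choosing the split with the lowest total weighted RMSE."""
    ks = np.asarray(ks, dtype=float)
    metric = np.asarray(metric, dtype=float)
    n = len(ks)
    best_knee, best_err = None, np.inf
    # Candidate knee c: left line covers points [0..c], right line [c..n-1].
    for c in range(2, n - 2):
        left_k, left_m = ks[: c + 1], metric[: c + 1]
        right_k, right_m = ks[c:], metric[c:]
        # Fit each side with a straight line and measure its RMSE.
        lc = np.polyfit(left_k, left_m, 1)
        rc = np.polyfit(right_k, right_m, 1)
        left_rmse = np.sqrt(np.mean((np.polyval(lc, left_k) - left_m) ** 2))
        right_rmse = np.sqrt(np.mean((np.polyval(rc, right_k) - right_m) ** 2))
        # Weight each side's RMSE by its share of the points.
        err = (len(left_k) * left_rmse + len(right_k) * right_rmse) / n
        if err < best_err:
            best_knee, best_err = int(ks[c]), err
    return best_knee
```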

For the evaluation metric, I am using the reciprocal of a simplified version of the Dunn index. It is simplified for speed (basically my diameter and inter-cluster distance calculations are simplified). The reciprocal is used so that the index works in the correct direction (i.e. lower is generally better).
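The exact simplifications above aren't spelled out, so the following is only an illustrative sketch: it uses the minimum centroid-to-centroid distance for cluster separation and a mean distance-to-centroid proxy for cluster diameter, then returns the reciprocal:

```python
import numpy as np

def reciprocal_dunn(points, labels, centroids):
    """Illustrative reciprocal of a simplified Dunn index (lower is better).
    Separation: minimum centroid-to-centroid distance.
    Diameter: twice the mean distance of points to their centroid
    (a cheap stand-in for the true cluster diameter)."""
    k = len(centroids)
    # Cheap diameter proxy per cluster.
    diameters = []
    for i in range(k):
        members = points[labels == i]
        diameters.append(2.0 * np.mean(np.linalg.norm(members - centroids[i], axis=1)))
    max_diameter = max(diameters)
    # Minimum pairwise centroid separation.
    min_sep = min(
        np.linalg.norm(centroids[i] - centroids[j])
        for i in range(k) for j in range(i + 1, k)
    )
    # Dunn = min_sep / max_diameter; return the reciprocal.
    return max_diameter / min_sep
```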

K-means is a stochastic algorithm, so it is typically run multiple times and the best fit chosen. This works pretty well, but when you are doing this for 1..N clusters the time quickly adds up. So it is in my interest to keep the number of runs in check. Overall processing time may determine whether my implementation is practical or not; I may ditch this functionality if I cannot speed it up.
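For context, the outer loop amounts to something like this sketch (using scikit-learn's KMeans, whose n_init parameter handles the multiple random restarts; metric_curve and runs_per_k are my names, and reciprocal_dunn is the sketch above):

```python
from sklearn.cluster import KMeans

def metric_curve(points, k_min, k_max, runs_per_k=10):
    """Evaluate the metric for each trial cluster count; with multiple
    random restarts per k (n_init), this dominates the total run time."""
    ks, scores = [], []
    for k in range(k_min, k_max + 1):
        km = KMeans(n_clusters=k, n_init=runs_per_k).fit(points)
        ks.append(k)
        scores.append(reciprocal_dunn(points, km.labels_, km.cluster_centers_))
    return ks, scores
```

The resulting (ks, scores) curve is what the two-line fit above consumes.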

1
Thinking about this further, I don't think an even (i.e. running-average) smoother would have any notable effect, because the L-method then fits lines using least squares. However, a shaped smoother such as a Gaussian might behave differently. I am going to try implementing a Gaussian of moderate size (a half-width of about 6-10 seems about right to me); there's a sketch of the idea after these comments. It is going to be a qualitative test. – winwaed
I do think this will be a good moderate-sized research project. If there are any college students looking for a project, I'd be interested in collaborating/mentoring/co-authoring. Such a project should perform quantitative comparisons and be more general than my specific application. I'll add the project-ideas tag to the question. – winwaed
What I have been doing is skipping the calculation for some values of N. If we are interested in cluster counts from M to N, then I calculate up to 2N to give enough of a line on the right-hand side. By dropping some of these high counts (e.g. only doing alternate values beyond a certain point), I get similar accuracy with some decent time savings. A couple of weeks ago I also multithreaded the code, which makes a big difference on a Core i7 :-) – winwaed
I just calculate for alternate values (e.g. odd values). I only do this at the high end, where it is less critical. – winwaed
There is also X-means, a k-means variant that starts with k=2 and then iteratively splits clusters further. – Has QUIT--Anony-Mousse
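A minimal sketch of the Gaussian smoothing idea from the comments (the kernel width convention and the reflected-edge handling are my assumptions):

```python
import numpy as np

def gaussian_smooth(values, half_width=8):
    """Smooth a 1-D metric curve with a Gaussian kernel before the
    two-line fit. half_width is treated here as roughly 2*sigma;
    the ends are handled by reflecting the series (an assumption)."""
    values = np.asarray(values, dtype=float)
    sigma = half_width / 2.0
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    # Reflect the ends so the convolution output stays full-length.
    padded = np.concatenate((values[radius:0:-1], values, values[-2:-radius - 2:-1]))
    return np.convolve(padded, kernel, mode="valid")
```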

1 Answer

6 votes

I had asked a similar question in the past here on SO. My question was about coming up with a consistent way of finding the knee of the L-shape you described. The curves in question represented the trade-off between complexity and a fit measure of the model.

The best solution was to find the point with the maximum distance d according to the figure shown:

[figure illustrating the maximum distance d from the curve]
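A sketch of that idea, assuming (as in the usual formulation of this trick) that d is the perpendicular distance from each curve point to the straight line joining the curve's first and last points:

```python
import numpy as np

def max_distance_knee(ks, metric):
    """Return the k whose curve point lies farthest from the straight
    line joining the first and last points of the metric curve."""
    pts = np.column_stack((ks, metric)).astype(float)
    start, end = pts[0], pts[-1]
    direction = (end - start) / np.linalg.norm(end - start)
    # Perpendicular distance of each point from the start-end line.
    # (You may want to normalize both axes first so neither dominates d.)
    rel = pts - start
    proj = np.outer(rel @ direction, direction)
    dists = np.linalg.norm(rel - proj, axis=1)
    return int(pts[np.argmax(dists), 0])
```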

Note: I haven't read the paper you linked to yet.