
The situation is as follows:

I have a dataset of documents which I've manually assigned to a (ground) cluster based on their subject. I've then used Hierarchical Agglomerative Clustering (HAC) to automatically cluster that same dataset. I'm now trying to evaluate the HAC clusters using the pair counting f-measure (as described in Characterization and evaluation of similarity measures for pairs of clusterings by Darius Pfitzner, Richard Leibbrandt & David Powers).

The problem I'm facing, however, is that my manual clustering produced flat clusters (so no relation between the clusters whatsoever), while the clusters found by HAC are hierarchical. So when looking at the dendrogram, the number of clusters you get depends on the depth (horizontal line) at which you cut it: at depth 0 (the root node) you have only 1 cluster, while at maximum depth the number of clusters equals the number of elements in your dataset.
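A minimal sketch of this (assuming SciPy; `X` is a hypothetical toy dataset). Note that SciPy measures the cut height from the leaves, i.e. the opposite direction of the "depth from the root" wording above, but the effect is the same:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))          # 8 points, 2 features (illustrative data)
Z = linkage(X, method="average")     # HAC with average linkage

for t in (0.0, Z[-1, 2] + 1):        # cut at the leaves, then above the root
    labels = fcluster(Z, t=t, criterion="distance")
    print(t, len(set(labels)))
# Cutting at height 0 yields 8 singleton clusters; cutting above the
# root's merge height yields 1 cluster containing everything.
```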

So, my questions now are:

  • Do I need to select a depth (so that I have a fixed set of clusters) in order to use the pair counting f-measure, or am I missing something?
  • If so, what criteria do I use to determine this depth?

1 Answer


The pair-counting measures are designed for overlap-free flat partitionings.

If you attempt to compute them for overlapping or hierarchical results, you can easily get values outside the [0, 1] range, so the measures clearly do not apply there.

So yes, you have to cut the tree somehow (e.g. at a specific height, or so as to obtain a particular number of clusters) in order to be able to use this evaluation measure.
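A minimal sketch of both steps (assuming SciPy; the toy data, the `pair_counting_f` helper, and the choice of k=2 are all illustrative): cut the dendrogram so that a fixed number of clusters remains, then score the resulting flat partitioning against the manual labels.

```python
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def pair_counting_f(labels_true, labels_pred):
    """Pair-counting F-measure between two flat partitionings."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_true = labels_true[i] == labels_true[j]
        same_pred = labels_pred[i] == labels_pred[j]
        if same_pred and same_true:
            tp += 1          # pair together in both partitionings
        elif same_pred:
            fp += 1          # together in prediction only
        elif same_true:
            fn += 1          # together in ground truth only
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy data: two well-separated groups of 5 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(5, 0.1, (5, 2))])
ground_truth = [0] * 5 + [1] * 5

Z = linkage(X, method="average")
# criterion="maxclust" cuts the tree so that at most t clusters remain.
flat = fcluster(Z, t=2, criterion="maxclust")
print(pair_counting_f(ground_truth, flat))  # 1.0 on this well-separated toy data
```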

A recent suggestion on how to extract a flat partitioning out of a hierarchical clustering result (whether that comes from linkage clustering, OPTICS or HDBSCAN) can be found here:

A framework for semi-supervised and unsupervised optimal extraction of clusters from hierarchies
R. J. G. B. Campello, D. Moulavi, A. Zimek, J. Sander
Data Mining and Knowledge Discovery, 27(3): 344–371, 2013.

but I have not used this yet. It sounds very useful though, and is on my to-read list.