
I am reading about the difference between k-means clustering and k-medoid clustering.

Supposedly there is an advantage to using the pairwise distance measure in the k-medoids algorithm, instead of the more familiar sum of squared Euclidean distances used to evaluate variance in k-means. And apparently this different distance metric somehow reduces the effect of noise and outliers.

I have seen this claim made, but I have yet to see any good mathematical reasoning behind it.

What makes the pairwise distance measure commonly used in k-medoids better? More exactly, how does the lack of a squared term give k-medoids the desirable properties associated with taking a median?

1 Answer


K-means is the most common clustering algorithm; it uses an iterative refinement technique and minimizes the sum of squared Euclidean distances between each point and the mean of its cluster. K-medoids instead minimizes the sum of dissimilarities between points labeled to be in a cluster and a point of that cluster designated as its center (the medoid).
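To make the two objectives concrete, here is a minimal sketch in NumPy of the cost each algorithm tries to minimize for a given labeling (the function names and the `dissimilarity` callable are just for illustration, not part of any library):

```python
import numpy as np

def kmeans_cost(X, labels):
    """Sum of squared Euclidean distances from each point to its cluster mean."""
    cost = 0.0
    for c in np.unique(labels):
        members = X[labels == c]
        center = members.mean(axis=0)          # the mean need not be a data point
        cost += ((members - center) ** 2).sum()
    return cost

def kmedoids_cost(X, labels, dissimilarity):
    """Sum of dissimilarities from each point to the best medoid of its cluster."""
    cost = 0.0
    for c in np.unique(labels):
        members = X[labels == c]
        # the medoid is the member with the smallest total dissimilarity to the others
        D = np.array([[dissimilarity(a, b) for b in members] for a in members])
        cost += D.sum(axis=1).min()
    return cost

# tiny usage example with two obvious clusters
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
labels = np.array([0, 0, 1, 1])
euclid = lambda a, b: np.linalg.norm(a - b)
print(kmeans_cost(X, labels), kmedoids_cost(X, labels, euclid))  # 1.0 2.0
```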

1. K-medoids is more flexible

You can use k-medoids with any dissimilarity measure, but k-means may fail to converge unless it is used with distances that are consistent with the mean (essentially, squared Euclidean distance). For example, absolute Pearson correlation cannot be used with k-means, but it works well with k-medoids; a minimal sketch of this is given below.
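As a rough illustration, here is a simple alternating-style k-medoids written from scratch (not the full PAM algorithm) that works on any precomputed dissimilarity matrix; `pearson_dissimilarity` is just one example of a measure that would not be safe with k-means:

```python
import numpy as np

def pearson_dissimilarity(X):
    """Pairwise 1 - |Pearson correlation| between the rows of X."""
    return 1.0 - np.abs(np.corrcoef(X))

def kmedoids(D, k, n_iter=100, seed=0):
    """Alternate between assigning points to the nearest medoid and
    re-picking each cluster's medoid, until the medoids stop changing."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)      # assign to nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            # medoid = member with the smallest total dissimilarity to its cluster
            within = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    labels = np.argmin(D[:, medoids], axis=1)          # final assignment
    return medoids, labels

# toy usage: 50 hypothetical series of length 20, clustered by correlation shape
X = np.random.default_rng(1).normal(size=(50, 20))
medoids, labels = kmedoids(pearson_dissimilarity(X), k=3)
```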

2. Robustness of medoid

The medoid used by k-medoids is roughly comparable to the median: just as the median is more robust to outliers than the arithmetic mean, the medoid is a more robust estimate of a representative point than the mean used in k-means.

For example:

[1, 2, 3, 4, 100000]

In the above example, both the median and medoid of this set are 3. The mean is 20002.

The medoid is representative of most of the data set, whereas the mean is dragged far away by the single outlier. The median has a breakdown point of 50%, while the mean has a breakdown point of 0%.
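A quick numeric check of the example above (using NumPy; the medoid is computed here with absolute distances purely for illustration):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 100000])
print(np.mean(x))     # 20002.0 -> dragged far away by the single outlier
print(np.median(x))   # 3.0     -> unaffected by the outlier

# The medoid is the data point with the smallest total distance
# to all other points; here it is also 3.
D = np.abs(x[:, None] - x[None, :])
print(x[np.argmin(D.sum(axis=1))])   # 3
```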

3. K-medoids is much more expensive

K-means is quite fast and much less expensive than k-medoids. K-medoids has to compute all the pairwise distances, so it runs in O(n^2 * k * i), whereas k-means runs in O(n * k * i), where the number of clusters times the number of iterations, k * i, is usually much smaller than n.
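Plugging some made-up but plausible values of n, k and i into the complexities quoted above gives a feel for the gap:

```python
# illustrative values only: 10,000 points, 10 clusters, 20 iterations
n, k, i = 10_000, 10, 20

kmeans_ops = n * k * i        # O(n*k*i): each point vs. each centroid, per iteration
kmedoids_ops = n * n * k * i  # O(n^2*k*i): dominated by the pairwise distances

print(f"k-means   ~ {kmeans_ops:,} distance computations")    # ~2,000,000
print(f"k-medoids ~ {kmedoids_ops:,} distance computations")  # ~20,000,000,000
```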

Hope this answer helps.
