K-means is the most common clustering algorithm based on iterative refinement. K-medoids, by contrast, minimizes the sum of dissimilarities between the points assigned to a cluster and a point designated as that cluster's center (the medoid).
1. K-medoids is more flexible
You can use k-medoids with any dissimilarity measure. K-means, on the other hand, may fail to converge when used with distances that are not consistent with the mean. For example, absolute Pearson correlation cannot be used with k-means, but it works well with k-medoids.
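To make this concrete, here is a minimal sketch of the simple alternating k-medoids variant (not the full PAM swap procedure) that accepts an arbitrary dissimilarity function, applied to `1 - |Pearson correlation|`. The function names, the `series` data, and the fixed `init` medoids are all illustrative choices, not part of any standard library:

```python
import random

def k_medoids(points, k, dissim, iters=100, init=None, seed=0):
    """Alternating k-medoids: works with ANY dissimilarity function."""
    rng = random.Random(seed)
    medoids = list(init) if init else rng.sample(range(len(points)), k)
    clusters = {}
    for _ in range(iters):
        # Assignment step: each point joins its nearest medoid's cluster.
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dissim(p, points[m]))
            clusters[nearest].append(i)
        # Update step: the new medoid is the cluster member with the
        # smallest total dissimilarity to the rest of its cluster.
        new_medoids = [
            min(members, key=lambda c: sum(dissim(points[c], points[j])
                                           for j in members)) if members else m
            for m, members in clusters.items()
        ]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, clusters

def pearson_dissim(a, b):
    """1 - |Pearson correlation|: a dissimilarity k-means cannot safely use."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return 1.0 - abs(cov / (sa * sb))

# Series that cluster by SHAPE, not magnitude: a rising trend vs. a peak.
series = [[1, 2, 3], [2, 4, 6], [1.5, 2.5, 3.5],   # rising trend
          [1, 3, 1], [2, 6, 2], [0, 5, 0]]          # peaked shape
medoids, clusters = k_medoids(series, 2, pearson_dissim, init=[0, 3])
# Groups indices {0, 1, 2} and {3, 4, 5} together by correlation shape.
```

Note that the update step never averages points: it only needs dissimilarity values between existing points, which is exactly why any measure works here.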
2. Robustness of medoid
The medoid used by k-medoids is roughly comparable to the median, and the median is far more robust to outliers than the arithmetic mean. The medoid is therefore a more robust estimate of a representative point than the mean used by k-means.
For example:
[1, 2, 3, 4, 100000]
In this set, both the median and the medoid are 3, while the mean is 20002. The medoid represents the bulk of the data, whereas the mean is dragged away by the single outlier. The median has a breakdown point of 50%; the mean has a breakdown point of 0%.
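The example above can be checked directly. Here the medoid is computed as the data member with the smallest total absolute distance to all the others (a minimal one-dimensional sketch):

```python
data = [1, 2, 3, 4, 100000]

avg = sum(data) / len(data)             # arithmetic mean, as used by k-means
med = sorted(data)[len(data) // 2]      # median (odd-length case)
# The medoid must be an actual member of the data set: the element with
# the smallest total distance to every other element.
medoid = min(data, key=lambda x: sum(abs(x - y) for y in data))

print(avg, med, medoid)  # 20002.0 3 3
```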
3. K-medoids is much more expensive
K-means is fast and much cheaper than k-medoids. K-medoids computes all pairwise distances and runs in O(n² · k · i), whereas k-means runs in O(n · k · i), where i is the number of iterations; typically k · i ≪ n, so the difference is substantial.
Hope this answer helps.