K-means is one of the simplest unsupervised learning algorithms for the well-known clustering problem. The procedure classifies a given data set into a predetermined number of clusters (say, k clusters) in a simple, straightforward way and at a low computational cost.

The shortcoming of K-means is that the value of K (the number of groups/clusters) must be determined beforehand. K-means is also a greedy algorithm: each iteration only improves the current solution locally, so it may converge to a local optimum and is hard pressed to attain the globally optimal clustering.

In K-means the nodes (centroids) are independent of each other: clusters are formed through the centroids (nodes) and the cluster size, and each point is assigned purely by its nearest centroid.
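The behaviour described above can be sketched with a minimal implementation of Lloyd's algorithm in NumPy. Note that `k` must be supplied up front, and that the random initialisation (a hypothetical choice here; real libraries such as scikit-learn use smarter schemes like k-means++) is what makes the greedy procedure liable to land in a local optimum:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's algorithm sketch: k is fixed beforehand."""
    rng = np.random.default_rng(seed)
    # Greedy start: pick k random data points as initial centroids.
    # Different seeds can converge to different local optima.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its cluster.
        new_centroids = np.array([X[labels == j].mean(axis=0)
                                  for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # converged to a (possibly local) optimum
        centroids = new_centroids
    return centroids, labels

# Toy data: two well-separated blobs. k = 2 is chosen by us,
# not discovered by the algorithm.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(5, 0.3, (50, 2))])
centroids, labels = kmeans(X, k=2)
```

Each centroid here moves independently of the others, exactly as described: a point's assignment depends only on which centroid is nearest.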

In SOM (Self-Organizing Maps), by contrast, the number of neurons in the output layer has a close relationship with the number of classes in the input data. Here the clusters are formed geometrically: neurons that are neighbours on the map are pulled toward similar inputs, so nearby neurons end up representing nearby regions of the data.

From a performance point of view, the K-means algorithm performs better than SOM as the number of clusters increases, but K-means is more sensitive to noise in the dataset than SOM.
