K-fold cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. It has a single parameter, K, which specifies the number of equal-sized groups (folds) the dataset is split into; each fold is held out once for testing while the remaining K-1 folds are used for training.
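A minimal sketch of this splitting, assuming scikit-learn is available (its KFold class implements exactly this fold assignment):

```python
# Sketch: splitting 10 samples into K=5 folds with scikit-learn's KFold.
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10).reshape(-1, 1)  # 10 toy samples

kf = KFold(n_splits=5, shuffle=True, random_state=0)

n_folds = 0
for train_idx, test_idx in kf.split(X):
    # With 10 samples and 5 folds, each test fold holds out 10/5 = 2 samples.
    assert len(test_idx) == 2
    assert len(train_idx) == 8
    n_folds += 1
```

Each sample appears in exactly one test fold, so the model is evaluated on every observation exactly once across the K iterations.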
Advantages:
1. Even with a small dataset, the model can be evaluated reliably, because every observation is used for both training and validation.
2. Combined with a hyperparameter search, it helps select good values for hyperparameters such as K in KNN or alpha in Naive Bayes, since each candidate value can be scored by its average performance across the folds.
Disadvantages:
1. The model is trained K times, so the time needed to evaluate each hyperparameter candidate grows by roughly a factor of K.
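The hyperparameter-tuning use case above can be sketched with scikit-learn's GridSearchCV, which runs k-fold cross-validation for every candidate value (the Iris dataset and the candidate list here are illustrative choices, not from the original text):

```python
# Sketch: tuning K (n_neighbors) in KNN via 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

candidates = [1, 3, 5, 7, 9]
grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": candidates},
    cv=5,  # 5 folds per candidate -> 5 * 5 = 25 model fits in total
)
grid.fit(X, y)

best_k = grid.best_params_["n_neighbors"]
```

Note the cost mentioned in the disadvantage: 5 candidates with 5 folds means 25 separate training runs, which is the K-fold multiplier in practice.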