
I've got a classification problem on my hands that I'd like to address with a machine learning algorithm (naive Bayes or a Markov model, probably; the question is independent of the classifier used). Given a number of training instances, I'm looking for a way to measure the performance of the trained classifier while taking overfitting into account.

That is: given N[1..100] training samples, if I run the training algorithm on all of them and then use those very same samples to measure fitness, the evaluation runs into an overfitting problem: the classifier will know the exact answers for the training instances without having much predictive power, rendering the fitness results useless.

An obvious solution is to split the hand-tagged samples into a training set and a test set; I'd like to learn about methods for selecting a statistically representative subset of samples for training.

White papers, book pointers, and PDFs much appreciated!

1 Answer


There are many performance metrics used to evaluate machine learning algorithms; let's focus on the ones used for classification problems. Common classification metrics include accuracy, log-loss, and AUC (area under the ROC curve). Precision and recall are also widely used, for example when evaluating the ranked results returned by search engines.
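As a minimal, library-free sketch of two of these metrics (assuming binary 0/1 labels and, for log-loss, predicted probabilities of the positive class):

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def log_loss(y_true, y_prob, eps=1e-15):
    """Average negative log-likelihood for binary labels (0/1),
    given predicted probabilities of the positive class."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# toy example: 3 of 4 predictions correct
y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]
y_prob = [0.9, 0.2, 0.4, 0.8]
print(accuracy(y_true, y_pred))  # 0.75
print(log_loss(y_true, y_prob))
```

Lower log-loss is better; unlike accuracy, it penalizes confident wrong probabilities heavily, which is why it is often preferred when the classifier outputs probabilities.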

You can also use 10-fold cross-validation for this problem. It's a pretty standard approach for evaluating classification performance.

The basic idea is to divide your learning samples into 10 subsets. Then use one subset as test data and the rest as training data. Repeat this for each subset and average the performance at the end.
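The procedure above can be sketched in plain Python. This is not a full classifier; the "majority-label" model inside the loop is just a placeholder you would swap for your actual training and prediction code:

```python
def k_fold_cross_validation(samples, k=10):
    """Split (feature, label) samples into k folds; for each fold,
    'train' on the other k-1 folds and score on the held-out fold.
    Returns the mean score across folds."""
    folds = [samples[i::k] for i in range(k)]  # simple round-robin split
    scores = []
    for i in range(k):
        test = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        # placeholder "training": learn the majority label in the train split
        labels = [label for _, label in train]
        majority = max(set(labels), key=labels.count)
        # placeholder "prediction": always predict the majority label
        correct = sum(1 for _, label in test if label == majority)
        scores.append(correct / len(test))
    return sum(scores) / len(scores)

# toy data: (feature, label) pairs, about two thirds labeled 1
data = [(x, 0 if x % 3 == 0 else 1) for x in range(100)]
print(k_fold_cross_validation(data, k=10))
```

In practice you would also shuffle the samples before splitting (and often stratify the folds so each one preserves the class proportions), so that a fold never ends up dominated by a single class.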
