
Getting 100% classification accuracy from LibSVM is nominally a good problem to have, but I'm pretty sure it is because something funny is going on...

As context, I'm working on a problem in the facial expression/recognition space, so getting 100% accuracy seems incredibly implausible (not that it would be plausible in most applications...). I'm guessing there is either some consistent bias in the data set that is making it overly easy for an SVM to pull out the answer, or, more likely, I've done something wrong on the SVM side.

I'm looking for suggestions to help understand what is going on: is it me (i.e., my usage of LibSVM), or is it the data?

The details:

  • About 2,500 labeled data vectors/instances (transformed video frames of individuals; fewer than 20 individual persons in total), binary classification problem. ~900 features/instance. Unbalanced data set at roughly a 1:4 ratio.

  • Ran subset.py to separate the data into test (500 instances) and train (remaining).

  • Ran "svm-train -t 0 ". (Note: apparently no need for '-w1 1 -w-1 4'...)

  • Ran svm-predict on the test file. Accuracy = 100%!
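
For reference, roughly the same pipeline expressed in Python with scikit-learn (a sketch only; I actually used the LibSVM command-line tools above, and "all_data.libsvm" is a placeholder for the real data file):

    from sklearn.datasets import load_svmlight_file
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Load the LibSVM-format data (~2500 instances, ~900 features).
    X, y = load_svmlight_file("all_data.libsvm")

    # Hold out 500 instances for testing, as subset.py did.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=500, stratify=y, random_state=0)

    # Linear kernel, the equivalent of "svm-train -t 0".
    clf = SVC(kernel="linear").fit(X_train, y_train)

    # The equivalent of running svm-predict on the held-out file.
    print("held-out accuracy:", clf.score(X_test, y_test))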

Things tried:

  • Checked about 10 times over that I'm not training & testing on the same data files, through some inadvertent command-line argument error

  • Re-ran subset.py (even with -s 1) multiple times and trained/tested on multiple different splits (in case I had randomly hit upon some magical train/test pairing).

  • Ran a simple diff-like check to confirm that the test file is not a subset of the training data (a sketch of such a check follows this list).

  • Running svm-scale on the data has no effect on accuracy (still 100%). (Although the number of support vectors does drop, from nSV=127, nBSV=64 to nSV=72, nBSV=0.)

  • (Weird) Using the default RBF kernel instead of the linear one (i.e., removing '-t 0') makes the accuracy drop to garbage(?!)

  • (Sanity check) Running svm-predict with a model trained on a scaled data set against an unscaled data set results in accuracy = 80% (i.e., it always guesses the dominant class). This is strictly a sanity check to make sure that svm-predict is nominally behaving correctly on my machine.
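
The diff-like overlap check mentioned above can be done in a few lines of Python; a sketch, where "data.train" and "data.test" are placeholders for the files subset.py produced:

    # Treat each line of the LibSVM-format files as one instance and look for
    # exact duplicates that appear in both the training and the test file.
    def read_instances(path):
        with open(path) as f:
            return set(line.strip() for line in f if line.strip())

    train_set = read_instances("data.train")
    test_set = read_instances("data.test")
    overlap = train_set & test_set
    print(f"{len(overlap)} of {len(test_set)} test instances also appear verbatim in the training file")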

Tentative conclusion?:

Something about the data is off: somehow, within the data set, there is a subtle, experimenter-driven effect that the SVM is picking up on.

(This doesn't, on the first pass, explain why the RBF kernel gives garbage results, however.)

I would greatly appreciate any suggestions on a) how to fix my usage of LibSVM (if that is actually the problem) or b) how to determine what subtle experimenter bias in the data LibSVM is picking up on.
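
For (b), one concrete check would be to fit the same kind of linear model and rank the features by the magnitude of their weights; a handful of features with outsized weights would be the first place to look for an artifact. A minimal sketch, assuming scikit-learn's LinearSVC (a LIBLINEAR wrapper, so close to but not identical to "svm-train -t 0") and a placeholder file name:

    import numpy as np
    from sklearn.datasets import load_svmlight_file
    from sklearn.svm import LinearSVC

    X, y = load_svmlight_file("all_data.libsvm")   # placeholder file name
    clf = LinearSVC().fit(X, y)

    # One weight per feature in the linear decision function.
    w = np.abs(clf.coef_.ravel())

    # The 20 most influential features; suspiciously dominant ones point at leakage.
    for i in np.argsort(w)[::-1][:20]:
        print(f"feature {i}: |w| = {w[i]:.4f}")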

1 Answer


100% accuracy is very uncommon and generally does not occur in standard classification tasks.

Either your recognition problem is rather easy, your test and training data are much more alike than they would be in a practical scenario, or you are actually re-classifying your training data in the test step. In the latter case, 100% classification accuracy easily occurs with high-capacity classifiers (i.e., classifiers with many "parameters"), such as the nearest-neighbor classifier.
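
As a tiny illustration of that last point, here is a sketch (synthetic data, scikit-learn assumed) in which a 1-nearest-neighbor classifier reports 100% accuracy simply because it is scored on the very data it was trained on:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))        # random features
    y = rng.integers(0, 2, size=200)      # random labels -- there is nothing to learn
    clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
    print(clf.score(X, y))                # 1.0: "perfect" accuracy on the training data itself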

It is possible that your classifier is overfitting the training set. To guard against that, evaluate your classification process with 10-fold cross-validation. It can also happen that your training and test data sets are simply very much alike. Generally, about 70% of the data is used for training and the remaining 30% for testing. You should recompute the results and report the average over the repeated runs; 10-fold cross-validation is a good choice for this.
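
A minimal sketch of 10-fold cross-validation with scikit-learn (an assumption; with the LibSVM tools themselves, "svm-train -v 10 -t 0" on the training file reports the cross-validation accuracy directly). "all_data.libsvm" is a placeholder file name:

    from sklearn.datasets import load_svmlight_file
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    X, y = load_svmlight_file("all_data.libsvm")

    # Stratified folds, since the classes are unbalanced (~1:4).
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(SVC(kernel="linear"), X, y, cv=cv)
    print("fold accuracies:", scores)
    print("mean accuracy:", scores.mean())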

However, relying only on classification accuracy when evaluating a learning method is not enough, especially on an unbalanced data set like yours; you should also consider additional evaluation metrics such as the confusion matrix, ROC curves, etc.
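
A sketch of those additional metrics on a single held-out split (scikit-learn assumed, placeholder file name):

    from sklearn.datasets import load_svmlight_file
    from sklearn.metrics import confusion_matrix, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_svmlight_file("all_data.libsvm")
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=500, stratify=y, random_state=0)

    clf = SVC(kernel="linear").fit(X_tr, y_tr)
    print(confusion_matrix(y_te, clf.predict(X_te)))       # rows: true class, columns: predicted
    print("ROC AUC:", roc_auc_score(y_te, clf.decision_function(X_te)))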

LIBSVM and LIBLINEAR are two popular open-source machine learning libraries; either can be used for the steps above.
