
I'm new to SVMs, and I'm trying to use the Python interface to libsvm to classify samples, each containing a mean and a stddev. However, I'm getting nonsensical results.

Is this task inappropriate for SVMs or is there an error in my use of libsvm? Below is the simple Python script I'm using to test:

#!/usr/bin/env python
# Simple classifier test.
# Adapted from the svm_test.py file included in the standard libsvm distribution.

from collections import defaultdict
from svm import *

# Define our sparse-format training and testing sets.
labels = [1, 2, 3, 4]
train = [  # key: 0=mean, 1=stddev
    {0: 2.5, 1: 3.5},
    {0: 5, 1: 1.2},
    {0: 7, 1: 3.3},
    {0: 10.3, 1: 0.3},
]
problem = svm_problem(labels, train)
test = [
    ({0: 3, 1: 3.11}, 1),
    ({0: 7.3, 1: 3.1}, 3),
    ({0: 7, 1: 3.3}, 3),
    ({0: 9.8, 1: 0.5}, 4),
]

# Test classifiers.
kernels = [LINEAR, POLY, RBF]
kname = ['linear', 'polynomial', 'rbf']
correct = defaultdict(int)
for kn, kt in zip(kname, kernels):
    print kt
    param = svm_parameter(kernel_type=kt, C=10, probability=1)
    model = svm_model(problem, param)
    for test_sample, correct_label in test:
        pred_label, pred_probability = model.predict_probability(test_sample)
        correct[kn] += pred_label == correct_label

# Show results.
print '-' * 80
print 'Accuracy:'
for kn, correct_count in correct.iteritems():
    print '\t', kn, '%.6f (%i of %i)' % (correct_count / float(len(test)), correct_count, len(test))

The domain seems fairly simple. I'd expect that if it's trained to know a mean of 2.5 means label 1, then when it sees a mean of 2.4, it should return label 1 as the most likely classification. However, each kernel has an accuracy of 0%. Why is this?

A couple of side notes. First, is there a way to hide all the verbose training output that libsvm dumps to the terminal? I've searched libsvm's docs and code, but I can't find any way to turn it off.
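On the verbosity question: the training messages come from libsvm's C code via printf, so replacing sys.stdout doesn't silence them. One generic workaround (a sketch of an OS-level file-descriptor redirect, not a libsvm API) is:

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def suppress_stdout():
    """Temporarily point OS-level stdout (fd 1) at /dev/null.

    This silences output written directly by C extensions such as
    libsvm, which bypasses Python's sys.stdout entirely.
    """
    sys.stdout.flush()
    devnull = os.open(os.devnull, os.O_WRONLY)
    saved = os.dup(1)           # keep a copy of the real stdout
    try:
        os.dup2(devnull, 1)     # fd 1 now writes to /dev/null
        yield
    finally:
        sys.stdout.flush()
        os.dup2(saved, 1)       # restore the original stdout
        os.close(saved)
        os.close(devnull)
```

You would then wrap only the training call, e.g. `with suppress_stdout(): model = svm_model(problem, param)`. Newer libsvm Python bindings also accept a `-q` (quiet) option in the parameter string, which is cleaner if your version supports it.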

Also, I had wanted to use simple strings as the keys in my sparse dataset (e.g. {'mean':2.5, 'stddev':3.5}). Unfortunately, libsvm only supports integers. I tried using the long integer representation of the string (e.g. 'mean' == 1109110110971110), but libsvm seems to truncate these to normal 32-bit integers. The only workaround I see is to maintain a separate "key" file that maps each string to an integer ('mean'=0, 'stddev'=1). But obviously, that'll be a pain since I'll have to maintain and persist a second file along with the serialized classifier. Does anyone see an easier way?
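On the string-key question: rather than hand-maintaining a key file, one lightweight pattern (a hypothetical helper, not part of libsvm) is to fold the name-to-index mapping into a small class whose state is plain data, so it can be serialized next to the model in one step; scikit-learn's DictVectorizer does essentially the same thing.

```python
import json

class FeatureIndexer:
    """Maps string feature names to stable integer indices for libsvm.

    The mapping is a plain dict, so it can be dumped with json right
    next to the serialized classifier and reloaded later.
    """

    def __init__(self, index=None):
        self.index = index if index is not None else {}

    def transform(self, sample):
        """Convert {'mean': 2.5, ...} into libsvm's {0: 2.5, ...} form,
        assigning a fresh index to any feature name not seen before."""
        out = {}
        for name, value in sample.items():
            if name not in self.index:
                self.index[name] = len(self.index)
            out[self.index[name]] = value
        return out

    def save(self, path):
        with open(path, 'w') as f:
            json.dump(self.index, f)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            return cls(json.load(f))
```

For example, `FeatureIndexer().transform({'mean': 2.5, 'stddev': 3.5})` yields `{0: 2.5, 1: 3.5}`, and the same indexer reused on later samples keeps the indices consistent.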

1 Answer


Your code works if you exclude the probability estimates (i.e., delete probability = 1, change predict_probability to just predict, and delete pred_probability).

<snip>
# Test classifiers.
kernels = [LINEAR, POLY, RBF]
kname = ['linear', 'polynomial', 'rbf']
correct = defaultdict(int)
for kn, kt in zip(kname, kernels):
    print kt
    param = svm_parameter(kernel_type=kt, C=10)  # Here -> removed probability = 1
    model = svm_model(problem, param)
    for test_sample, correct_label in test:
        # Here -> changed predict_probability to just predict
        pred_label = model.predict(test_sample)
        correct[kn] += pred_label == correct_label
</snip>

With this change, you will get:

Accuracy:
        polynomial 1.000000 (4 of 4)
        rbf 1.000000 (4 of 4)
        linear 1.000000 (4 of 4)

Prediction with probability estimates does work if you double up the data in the training set, i.e., include each data point twice.
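The doubling trick can be sketched as below. The likely reason it's needed (my reading of libsvm's behavior, not stated above) is that probability estimates are fitted via internal cross-validation, which breaks down when a class is represented by only a single training point.

```python
labels = [1, 2, 3, 4]
train = [  # key: 0=mean, 1=stddev
    {0: 2.5, 1: 3.5},
    {0: 5, 1: 1.2},
    {0: 7, 1: 3.3},
    {0: 10.3, 1: 0.3},
]

# Repeat every example so each class has at least two training points.
labels, train = labels * 2, train * 2

# Then build the problem as before, keeping probability = 1:
# problem = svm_problem(labels, train)
```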

