
I know SVMs are supposed to be 'ANN killers' in that they automatically select representation complexity and find a global optimum (see here for some SVM-praising quotes).

But here is where I'm unclear: do all of these claims of superiority hold only for the two-class decision problem, or do they go further? (I assume they hold for non-linearly separable classes, or else no one would care.)

So a sample of some of the cases I'd like to be cleared up:

Are SVMs better than ANNs with many classes?

in an online setting?

What about in a semi-supervised case like reinforcement learning?

Is there a better unsupervised version of SVMs?

I don't expect someone to answer all of these lil' subquestions, but rather to give some general bounds for when SVMs are better than the common ANN equivalents (e.g. FFBP, recurrent BP, Boltzmann machines, SOMs, etc.) in practice, and preferably, in theory as well.

1 Answer


The defining feature of an SVM is the maximum-margin separating hyperplane, whose position is determined entirely by the support vectors (the training points closest to the boundary). An SVM is inherently a binary classifier; multi-class problems are typically handled with a "one-against-rest" scheme: one SVM is trained per class to separate "Class k" from "not Class k", and at prediction time the class whose SVM produces the highest score wins.
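The one-against-rest scheme can be sketched with scikit-learn, assuming a synthetic three-class toy problem (the data set and parameters here are illustrative, not from the original question):

```python
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Hypothetical 3-class toy data; each binary SVM learns
# "class k" vs. "not class k", and the highest score wins.
X, y = make_classification(n_samples=300, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=3, random_state=0)

clf = OneVsRestClassifier(SVC(kernel="rbf"))
clf.fit(X, y)

print(len(clf.estimators_))  # one binary SVM per class
```

Note that `SVC` also exposes this directly via its `decision_function_shape="ovr"` option; the explicit `OneVsRestClassifier` wrapper just makes the per-class structure visible.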

image

Standard SVMs are not well suited to an online learning setting (i.e., incremental training). The separating hyperplane is determined by a small number of support vectors, and a single additional data point could in principle significantly shift its position, so the classical batch formulation has to be retrained on the full data set when new examples arrive.
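A common workaround is to approximate a linear SVM online with stochastic gradient descent on the hinge loss, which supports incremental updates. A minimal sketch, assuming streamed mini-batches from a made-up linearly separable problem:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)

# loss="hinge" makes SGDClassifier optimize a linear-SVM-style objective.
clf = SGDClassifier(loss="hinge", alpha=1e-3)
classes = np.array([0, 1])

for _ in range(20):                      # simulate a stream of mini-batches
    X = rng.randn(32, 2)
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    clf.partial_fit(X, y, classes=classes)  # incremental update, no retraining

# evaluate on fresh data from the same distribution
X_test = rng.randn(200, 2)
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print(clf.score(X_test, y_test))
```

This is an approximation of the batch SVM solution, not an exact incremental SVM; true incremental SVM algorithms exist but are less commonly used in practice.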

Artificial neural networks can be more expensive to train, but the two families are closely related: a simple single-layer network trained with a hinge loss is essentially a linear SVM. With recent work in deep learning, ANNs have come to outperform SVMs on many prediction tasks, because deep architectures can adapt their learned representations to the problem rather than relying on a fixed kernel.
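To illustrate the similarity between a linear SVM and a single-layer hinge-loss model, the sketch below (on a made-up 2-D data set; all names and parameters are illustrative) trains both and compares their accuracy:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.svm import LinearSVC

# Toy 2-D binary problem.
X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# Batch linear SVM.
svm = LinearSVC(C=1.0).fit(X, y)

# SGD on the hinge loss: effectively a one-layer "network"
# optimizing the same kind of margin objective.
net = SGDClassifier(loss="hinge", alpha=1e-3).fit(X, y)

print(svm.score(X, y), net.score(X, y))  # typically very close
```

The two models optimize closely related objectives by different means, which is why their decision boundaries on simple linearly separable data come out nearly the same; the advantage of ANNs appears when hidden layers are added.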

Hope this answer helps.

...