in Machine Learning by (13.5k points)

I found several questions related to this, but none of them resolved my doubts. In particular, the two answers to this question confused me even more.

I'm training a linear SVM on top of a set of features extracted from images by a convolutional neural network. I have, for example, a 3500x4096 matrix X with examples on rows and features on columns, as usual.

I'm wondering how to properly standardize/normalize this matrix before feeding the SVM. I see two ways (using sklearn):

Standardizing features. It results in features with zero mean and unit standard deviation.

X = sklearn.preprocessing.scale(X)

Normalizing features. With axis=0, it results in each feature (column) having unit norm.

X = sklearn.preprocessing.normalize(X, axis=0)
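To make the difference between the two concrete, here is a minimal sketch on a toy matrix (the values are illustrative, not from the dataset above): standardizing gives each column zero mean and unit std, while normalizing with axis=0 gives each column unit L2 norm.

```python
import numpy as np
from sklearn.preprocessing import scale, normalize

# Toy feature matrix: 4 samples x 2 features (illustrative values only)
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0],
              [4.0, 800.0]])

# Standardizing: each column ends up with mean 0 and std 1
X_std = scale(X)
print(X_std.mean(axis=0))  # approximately [0, 0]
print(X_std.std(axis=0))   # approximately [1, 1]

# Normalizing along axis=0: each column ends up with unit L2 norm
X_norm = normalize(X, axis=0)
print(np.linalg.norm(X_norm, axis=0))  # approximately [1, 1]
```

Note that the two transforms generally produce different matrices, which is why downstream accuracy can differ between them.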

My results are noticeably better with normalization (76% accuracy) than with standardization (68% accuracy).

Is it a completely dataset-dependent choice? Or how can one choose between the two techniques?

1 Answer

by (33.1k points)

You can choose the scaling scheme based on what makes sense for your data. There are several ways of scaling, and which one to use depends on the dataset. Each scheme brings the values of different features into comparable ranges, but each preserves a different kind of information. There is often a rational explanation for why a particular scheme suits a particular case, so pick the one that works better for your data.

StandardScaler

This is what sklearn.preprocessing.scale(X) does. It assumes your features are roughly normally distributed and rescales them so that each feature's distribution is centered around 0 with a standard deviation of 1.

It calculates the mean and stdev of each feature, then converts each actual value of that feature into a z-score: how many stdevs away from the mean is this value?

z = (value - mean) / stdev
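As a quick sanity check of this formula, a minimal sketch (toy values, not from the dataset above) computing the z-score by hand per feature and comparing it with sklearn's scale:

```python
import numpy as np
from sklearn.preprocessing import scale

# Toy matrix: 3 samples x 2 features (illustrative values only)
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# Manual z-score per feature (column): (value - mean) / stdev
z_manual = (X - X.mean(axis=0)) / X.std(axis=0)

# sklearn.preprocessing.scale computes the same thing
z_sklearn = scale(X)
print(np.allclose(z_manual, z_sklearn))  # True
```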

Hope this answer helps you!
