Intellipaat

0 votes
in Machine Learning by (19k points)

I'm trying to use SGD to classify a large dataset. As the data is too large to fit into memory, I'd like to use the partial_fit method to train the classifier. I have selected a sample of the dataset (100,000 rows) that fits into memory to test fit vs. partial_fit:

from sklearn.linear_model import SGDClassifier

import numpy

def batches(l, n):

    for i in range(0, len(l), n):

        yield l[i:i+n]

clf1 = SGDClassifier(shuffle=True, loss='log').fit(X, Y)

clf2 = SGDClassifier(shuffle=True, loss='log')

n_iter = 60

for n in range(n_iter):

    for batch in batches(range(len(X)), 10000):

        clf2.partial_fit(X[batch[0]:batch[-1]+1], Y[batch[0]:batch[-1]+1], classes=numpy.unique(Y))

I then test both classifiers on an identical test set. In the first case I get an accuracy of 100%. As I understand it, SGD by default makes 5 passes over the training data (n_iter = 5).

In the second case, I have to pass 60 times over the data to reach the same accuracy.

Why this difference (5 vs. 60)? Or am I doing something wrong?

1 Answer

0 votes
by (33.1k points)

You need to shuffle the training data between each iteration, because setting shuffle=True when instantiating the model will NOT shuffle the data when you call partial_fit; that option applies only to fit.

For example:

from sklearn.linear_model import SGDClassifier

import numpy

import random

clf2 = SGDClassifier(loss='log') # shuffle=True is useless here

shuffledRange = list(range(len(X)))

n_iter = 5

for n in range(n_iter):

    random.shuffle(shuffledRange)  # reshuffle the indices before every pass

    shuffledX = [X[i] for i in shuffledRange]

    shuffledY = [Y[i] for i in shuffledRange]

    for batch in batches(range(len(shuffledX)), 10000):

        clf2.partial_fit(shuffledX[batch[0]:batch[-1]+1], shuffledY[batch[0]:batch[-1]+1], classes=numpy.unique(Y))

I hope this answer helps.
