I'm trying to use SGD to classify a large dataset. Because the data is too large to fit into memory, I'd like to use the partial_fit method to train the classifier. To compare fit and partial_fit, I selected a sample of the dataset (100,000 rows) that does fit into memory:
    import numpy
    from sklearn.linear_model import SGDClassifier

    def batches(l, n):
        # Yield successive index chunks of size n.
        for i in range(0, len(l), n):
            yield l[i:i + n]

    # One call to fit: scikit-learn runs the epochs internally.
    clf1 = SGDClassifier(shuffle=True, loss='log')
    clf1.fit(X, Y)

    # Manual epochs: each outer iteration is one full pass over the mini-batches.
    clf2 = SGDClassifier(shuffle=True, loss='log')
    n_iter = 60
    for n in range(n_iter):
        for batch in batches(range(len(X)), 10000):
            clf2.partial_fit(X[batch[0]:batch[-1] + 1],
                             Y[batch[0]:batch[-1] + 1],
                             classes=numpy.unique(Y))
I then test both classifiers on an identical test set. In the first case I get an accuracy of 100%. As I understand it, SGD by default makes 5 passes over the training data (n_iter = 5).
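If that understanding is correct, the single fit call above should be equivalent to requesting 5 epochs explicitly. A minimal sketch, assuming the scikit-learn version I'm on still accepts the n_iter parameter (newer releases renamed it to max_iter):

    # Should behave like the plain clf1.fit(X, Y) above,
    # assuming n_iter really defaults to 5 in this version.
    clf1_explicit = SGDClassifier(shuffle=True, loss='log', n_iter=5)
    clf1_explicit.fit(X, Y)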
In the second case, I have to make 60 passes over the data to reach the same accuracy.
Why this difference (5 vs. 60)? Or am I doing something wrong?
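For reference, this is roughly how I compare the two models; X_test and Y_test are placeholders for my held-out split:

    from sklearn.metrics import accuracy_score

    # Score both classifiers on the same held-out data.
    print(accuracy_score(Y_test, clf1.predict(X_test)))
    print(accuracy_score(Y_test, clf2.predict(X_test)))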