
I'm trying to create N balanced random subsamples of my large unbalanced dataset. Is there a way to do this simply with scikit-learn / pandas or do I have to implement it myself? Any pointers to code that does this?

These subsamples should be random and may overlap, as I feed each one to a separate classifier in a very large ensemble of classifiers.

In Weka there is a tool called SpreadSubsample; is there an equivalent in sklearn? http://wiki.pentaho.com/display/DATAMINING/SpreadSubsample

(I know about weighting but that's not what I'm looking for.)

1 Answer


Here is a simple solution using imbalanced-learn's RandomUnderSampler, which creates one balanced random subsample; run it N times (with different random seeds) to get N subsamples.

import pandas as pd
from imblearn.under_sampling import RandomUnderSampler

dataset = pd.read_csv("data.csv")

X = dataset.iloc[:, 1:12].values
y = dataset.iloc[:, 12].values

# fit_resample replaces the deprecated fit_sample API, and the kept row
# indices are now exposed via sample_indices_ instead of return_indices=True.
rus = RandomUnderSampler()
X_rus, y_rus = rus.fit_resample(X, y)
id_rus = rus.sample_indices_

...