I have trouble understanding the difference (if there is one) between roc_auc_score() and auc() in scikit-learn.
I'm trying to predict a binary output with imbalanced classes (around 1.5% for Y=1).
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

model_logit = LogisticRegression(class_weight='balanced')  # 'auto' was renamed to 'balanced'
model_logit.fit(xtrain, Y_train)
false_positive_rate, true_positive_rate, thresholds = roc_curve(Y_test, model_logit.predict_proba(xtest)[:, 1])
Can somebody explain this difference? I thought both were just calculating the area under the ROC curve. It might be because of the imbalanced dataset, but I could not figure out why.
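For reference, here is a minimal self-contained comparison of the two calls as I understand they should relate. The synthetic data and all variable names here are made up for illustration, not taken from my actual dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc, roc_auc_score

# Synthetic imbalanced data (~1.5% positives), purely illustrative
X, y = make_classification(n_samples=20000, weights=[0.985], random_state=0)
X_train, X_test = X[:15000], X[15000:]
y_train, y_test = y[:15000], y[15000:]

clf = LogisticRegression(class_weight='balanced', max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

# Method 1: roc_auc_score directly on labels and predicted scores
auc1 = roc_auc_score(y_test, scores)

# Method 2: auc() applied to the (fpr, tpr) points returned by roc_curve
fpr, tpr, thresholds = roc_curve(y_test, scores)
auc2 = auc(fpr, tpr)

print(auc1, auc2)  # both compute the same trapezoidal area under the ROC curve
```

My understanding is that roc_auc_score(y_test, scores) is essentially a shortcut for auc(fpr, tpr) with fpr and tpr taken from roc_curve, so I would expect the two numbers above to match, and any difference to come from feeding auc() something other than the roc_curve output.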