10
votes

Problem

I am trying to use scikit-learn's LogisticRegressionCV with roc_auc_score as the scoring metric.

from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score

clf = LogisticRegressionCV(scoring=roc_auc_score)

But when I attempt to fit the model (clf.fit(X, y)), it throws an error.

 ValueError: average has to be one of (None, 'micro', 'macro', 'weighted', 'samples')

That's cool. It's clear what's going on: roc_auc_score needs to be called with the average argument specified, per its documentation and the error above. So I tried that.

clf = LogisticRegressionCV(scoring=roc_auc_score(average='weighted'))

But it turns out that roc_auc_score can't be called with only the optional argument, since its two required arguments (the true labels and the predicted scores) are missing, and this throws another error.

TypeError: roc_auc_score() takes at least 2 arguments (1 given)

Question

Any thoughts on how I can use roc_auc_score as the scoring metric for LogisticRegressionCV in a way that I can specify an argument for the scoring function?

I can't find an SO question on this issue or a discussion of this issue in scikit-learn's GitHub repo, but surely someone has run into this before?

3
According to the docs you linked to, average has a default value of "macro", so that shouldn't be causing the error. - BrenBarn
Yeah, not sure why it's asking for a definition of that argument. I thought it might be the version I'm using (0.16.1), but the docs for that version show the same thing. - Gyan Veda

3 Answers

10
votes

You can use make_scorer, e.g.

from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score, make_scorer
from sklearn.datasets import make_classification

# some example data
X, y = make_classification()

# little hack so roc_auc_score only sees the probability of the positive class (y == 1)
def roc_auc_score_proba(y_true, proba):
    return roc_auc_score(y_true, proba[:, 1])

# define your scorer
auc = make_scorer(roc_auc_score_proba, needs_proba=True)

# define your classifier
clf = LogisticRegressionCV(scoring=auc)

# train
clf.fit(X, y)

# have a look at the cross-validated scores
print(clf.scores_)
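
If you want to sanity-check the result: clf.scores_ is a dict keyed by class label, where each value is an array of shape (n_folds, n_Cs) of cross-validated AUC values. A rough sketch for summarising it (the rounding and the loop are just for readability):

import numpy as np

# average over the folds to get one AUC per candidate value of C
for label, fold_scores in clf.scores_.items():
    print(label, np.round(fold_scores.mean(axis=0), 3))

# the value(s) of C that were finally selected
print(clf.C_)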
8
votes

I found a way to solve this problem!

scikit-learn offers a make_scorer function in its metrics module that lets you build a scoring object from a metric function, with any keyword arguments fixed to non-default values (see here for more information on this function in the scikit-learn docs).

So, I created a scoring object with the average argument specified.

from sklearn.metrics import make_scorer, roc_auc_score

roc_auc_weighted = make_scorer(roc_auc_score, average='weighted')

Then, I passed that object in the call to LogisticRegressionCV and it ran without any issues!

clf = LogisticRegressionCV(scoring=roc_auc_weighted)
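
If you want a quick smoke test of this scorer, here is a minimal sketch, using make_classification as a stand-in for your own X and y (the random_state is just for reproducibility):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

# toy data standing in for your own X and y
X, y = make_classification(random_state=0)

clf = LogisticRegressionCV(scoring=roc_auc_weighted)
clf.fit(X, y)

# cross-validated scores per class label and per candidate C
print(clf.scores_)

Note that, as written, make_scorer will evaluate roc_auc_score on the output of clf.predict; if you would rather score on predicted probabilities, add needs_proba=True as in the answer above.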
1
vote

A bit late (4 years later), but today you can simply use:

clf = LogisticRegressionCV(scoring='roc_auc')

Also, all other scoring keys can be obtained through:

from sklearn.metrics import SCORERS
print(SCORERS.keys())
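
For example, to see just the AUC-related keys (the exact set depends on your scikit-learn version):

from sklearn.metrics import SCORERS

# keep only the AUC-flavoured scoring strings, e.g. 'roc_auc'
print(sorted(key for key in SCORERS if 'auc' in key))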