3
votes

I used H2O to build some machine learning models for a binary classification problem, and the test results looked pretty good. But then I checked and found something weird. Out of curiosity, I printed the model's predictions for the test set and discovered that the model actually predicts 0 (negative) all the time, yet the AUC is around 0.65 and the precision is not 0.0. I then computed the same metrics with scikit-learn to compare, and (as expected) they're different: scikit-learn yields a precision of 0.0 and an AUC of 0.5, which I believe is correct. Here's the code that I used:

import h2o
import sklearn.metrics

model = h2o.load_model(model_path)
predictions = model.predict(Test_data).as_data_frame()

# H2O version of the AUC score
auc = model.model_performance(Test_data).auc()

# scikit-learn version of the AUC score
auc_sklearn = sklearn.metrics.roc_auc_score(y_true, predictions['predict'].tolist())
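
For reference, here's a minimal sketch of the precision comparison on the scikit-learn side (assuming y_true holds the true 0/1 test labels, as above):

# precision_score comes out as 0.0 when the model never predicts the
# positive class: there are no true positives and no false positives
precision_sklearn = sklearn.metrics.precision_score(y_true, predictions['predict'].tolist())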

Any thoughts? Thanks in advance!

1
Just speculating rather than giving a full answer, but scikit-learn probably normalizes against biased class sizes. – Hans Musgrave

1 Answer

5
votes

There is no difference between H2O and scikit-learn scoring; you just need to understand how to interpret the output so you can compare them accurately.

If you look at the data in predictions['predict'], you'll see that it's a predicted class label, not a raw predicted probability. AUC is computed from the raw probabilities, so you need to use the correct column (p1, the predicted probability of the positive class). See below:

import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
h2o.init()

# Import a sample binary outcome train/test set into H2O
train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
test = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_test_5k.csv")

# Identify predictors and response
x = train.columns
y = "response"
x.remove(y)

# For binary classification, response should be a factor
train[y] = train[y].asfactor()
test[y] = test[y].asfactor()

# Train a GBM
model = H2OGradientBoostingEstimator(distribution="bernoulli", seed=1)
model.train(x=x, y=y, training_frame=train)

# Test AUC
model.model_performance(test).auc()
# 0.7817203808052897

# Generate predictions on a test set
pred = model.predict(test)

Examine the output:

In [4]: pred.head()
Out[4]:
  predict        p0        p1
---------  --------  --------
        0  0.715077  0.284923
        0  0.778536  0.221464
        0  0.580118  0.419882
        1  0.316875  0.683125
        0  0.71118   0.28882
        1  0.342766  0.657234
        1  0.297636  0.702364
        0  0.594192  0.405808
        1  0.513834  0.486166
        0  0.70859   0.29141

[10 rows x 3 columns]
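
As an aside, the predict column is just p1 thresholded: by default, H2O labels a row positive when p1 exceeds the max-F1 threshold from the model's metrics. A quick sketch to verify this (assuming find_threshold_by_max_metric on the binomial metrics object, and using the model and frames from above):

# Sketch: recover the labels in `predict` by thresholding `p1` manually.
# H2O's default label threshold maximizes F1 on the training metrics.
perf_train = model.model_performance(train)
thr = perf_train.find_threshold_by_max_metric("f1")

manual_labels = (pred["p1"] > thr).as_data_frame()["p1"]
h2o_labels = pred["predict"].as_data_frame()["predict"].astype(int)
print((manual_labels == h2o_labels).mean())  # expect ~1.0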

Now compare to sklearn:

from sklearn.metrics import roc_auc_score

pred_df = pred.as_data_frame()
y_true = test[y].as_data_frame()

roc_auc_score(y_true, pred_df['p1'].tolist())
# 0.78170751032654806

Here you see that they are approximately the same. AUC is computed numerically from the ROC curve, so different implementations can disagree after a few decimal places depending on how they handle thresholds and ties.
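
For contrast, here's what happens if you pass the class labels from the predict column instead (the mistake in the question). The hard 0/1 labels throw away the ranking information that AUC measures, so the score drops:

# Class labels instead of probabilities: AUC degenerates toward the
# single operating point that the 0/1 labels encode
roc_auc_score(y_true, pred_df['predict'].astype(int).tolist())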