3
votes

I use the same dataset to train a logistic regression model in both R and Python (sklearn). The dataset is imbalanced, and I find that the AUC is quite different between the two. This is the Python code:

from sklearn import linear_model, metrics

model_logistic = linear_model.LogisticRegression()  # auc 0.623
model_logistic.fit(train_x, train_y)
pred_logistic = model_logistic.predict(test_x)  # mean: 0.0235, var: 0.023
print("logistic auc:", metrics.roc_auc_score(test_y, pred_logistic))

This is the code of R:

glm_fit <- glm(label ~ watch_cnt_7 + bid_cnt_7 + vi_cnt_itm_1 +
                 ITEM_PRICE + add_to_cart_cnt_7 + offer_cnt_7 +
                 dwell_dlta_4to2 + vi_cnt_itm_2 + asq_cnt_7 +
                 watch_cnt_14to7 + dwell_dlta_6to4 + auct_type +
                 vi_cnt_itm_3 + vi_cnt_itm_7 + vi_dlta_4to2 +
                 vi_cnt_itm_4 + vi_dlta_6to4 + tenure + sum_SRCH_item_7 +
                 vi_cnt_itm_6 + dwell_itm_3 + offer_cnt_14to7 +
                 dwell_itm_2 + dwell_itm_6 + CNDTN_ROLLUP_ID +
                 dwell_itm_5 + dwell_itm_4 + dwell_itm_1 +
                 bid_cnt_14to7 + item_prchsd_cnt_14to7 +
                 dwell_itm_7 + median_day_rate + vb_ratio,
               data = train, family = binomial())
p_lm <- predict(glm_fit, test[1:nc-1], type = "response")
pred_lm <- prediction(p_lm, test$label)
auc <- performance(pred_lm, 'auc')@y.values

The AUC in Python is 0.623, while in R it is 0.887. So I want to know what is wrong with the sklearn logistic regression and how to fix it. Thanks.

1
For one, 1:nc-1 is wrong. – rawr
That error won't matter much. 1:nc-1 is the same as 0:(nc-1), which, used as a column selection, is equivalent to 1:(nc-1). – nograpes
It would be helpful if you created a small reproducible example. For example, you could create a small, simple dataset in R, then read it in and run the model in both Python and R. It would also help to confirm that the coefficients of the logistic regression are the same in both models. – nograpes
I changed the parameters of the model before training: model = linear_model.LogisticRegression(C=10000, class_weight='auto', random_state=42, fit_intercept=True), and the new AUC becomes 0.83. – yanachen
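The comment above points at a second, separate difference between the two fits: sklearn's LogisticRegression applies L2 regularization by default (C=1.0), whereas R's glm() fits an unpenalized maximum-likelihood model, so the coefficients will not match out of the box. A very large C effectively disables the penalty. A minimal sketch on synthetic imbalanced data (not the asker's dataset; feature counts and class weights are illustrative assumptions):

```python
# Compare sklearn's default (penalized) fit with a near-unpenalized one
# (large C), which is closer to what R's glm() computes.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced dataset standing in for the asker's data.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

default_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)        # C=1.0, L2-penalized
unpenalized = LogisticRegression(C=1e4, max_iter=1000).fit(X_tr, y_tr)   # penalty ~off, like glm()

for name, m in [("C=1.0 (default)", default_model), ("C=1e4 (~glm)", unpenalized)]:
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(name, round(auc, 3))
```

Note that `class_weight='auto'` from the comment is the old spelling; recent sklearn versions use `class_weight='balanced'`.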

1 Answer

1
votes

In the Python script, you should use predict_proba to get the probability estimates for both classes and take the second column (the positive class) as the input to roc_auc_score, because the ROC curve is built by varying the probability threshold. predict returns hard 0/1 labels, which collapses the curve to a single operating point and understates the AUC. The R code already passes probabilities (type = "response"), which is why it scores higher.

pred_logistic = model_logistic.predict_proba(test_x)[:,1]
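A minimal, self-contained sketch of the difference on synthetic imbalanced data (not the asker's dataset; sizes and class weights are assumptions):

```python
# roc_auc_score with hard labels from predict() vs. scores from
# predict_proba(): the labels give only one threshold's (FPR, TPR) point,
# while the probabilities trace out the full ROC curve.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc_labels = roc_auc_score(y_te, model.predict(X_te))             # hard 0/1 labels
auc_probs = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])  # probabilities

print("AUC from predict:       ", round(auc_labels, 3))
print("AUC from predict_proba: ", round(auc_probs, 3))
```

On imbalanced data the gap is usually large, because the default 0.5 threshold predicts the minority class rarely, exactly the pattern in the question (0.623 vs 0.887).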