
First-time poster here, so apologies for any rookie errors.

I am using the caret package in R for classification. I am fitting some models (GBM, linear SVM, NB, LDA) using repeated 10-fold cross-validation over a training set. Using a custom trainControl, caret even gives me a whole range of model performance metrics over the held-out folds: ROC, Sens/Spec, Kappa, Accuracy. That really is fantastic. There is just one more metric I would love to have: some measure of model calibration.

I noticed that caret has a calibration function that can create a calibration plot to assess how well the predicted class probabilities match the observed event rates. Is it possible to have caret compute this for each test fold during the cross-validated model building? Or can it only be applied to new held-out data that we are making predictions on?

For some context, at the moment I have something like this:

library(caret)

fitControl <- trainControl(method = "repeatedcv", number = 10, repeats = 2,
                           classProbs = TRUE,
                           summaryFunction = custom.summary)
gbmGrid <- expand.grid(.interaction.depth = c(1, 2, 3),
                       .n.trees = seq(100, 800, by = 100),
                       .shrinkage = 0.01)
gbmModel <- train(y = train_target, x = data.frame(t_train_predictors),
                  method = "gbm",
                  trControl = fitControl,
                  tuneGrid = gbmGrid,
                  verbose = FALSE)
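
In case it is relevant, custom.summary is nothing exotic; it roughly combines caret's built-in twoClassSummary and defaultSummary so that ROC, Sens, Spec, Accuracy and Kappa are all reported together (a sketch along these lines, not the exact function):

# Sketch: combine caret's built-in summaries so that ROC/Sens/Spec
# and Accuracy/Kappa are all computed for each held-out fold.
custom.summary <- function(data, lev = NULL, model = NULL) {
  c(twoClassSummary(data, lev, model), defaultSummary(data, lev, model))
}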

If it helps, I am using ~25 numeric predictors and N=2,200, predicting a two-class factor.

Many thanks in advance for any help/advice. Adam

1 Answer


The calibration function takes whatever data you give it. You can get the resampled predictions from the pred sub-object of the train result:

> library(caret)
> 
> set.seed(1)
> dat <- twoClassSim(2000)
> 
> set.seed(2)
> mod <- train(Class ~ ., data = dat, 
+              method = "lda",
+              trControl = trainControl(savePredictions = TRUE,
+                                       classProbs = TRUE))
> 
> str(mod$pred)
'data.frame':   18413 obs. of  7 variables:
 $ pred     : Factor w/ 2 levels "Class1","Class2": 1 2 2 1 1 2 1 1 2 1 ...
 $ obs      : Factor w/ 2 levels "Class1","Class2": 1 2 2 1 1 2 1 1 2 2 ...
 $ Class1   : num  0.631 0.018 0.138 0.686 0.926 ...
 $ Class2   : num  0.369 0.982 0.8616 0.3139 0.0744 ...
 $ rowIndex : int  1 3 4 10 12 13 18 22 25 27 ...
 $ parameter: Factor w/ 1 level "none": 1 1 1 1 1 1 1 1 1 1 ...
 $ Resample : chr  "Resample01" "Resample01" "Resample01" "Resample01" ...

Then you could use:

> cal <- calibration(obs ~ Class1, data = mod$pred)
> xyplot(cal)
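
If you want the numbers behind the plot rather than the lattice figure, they should be in the object's data component (bin midpoints and observed event percentages; the exact column names may vary by caret version):

> head(cal$data)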

Just keep in mind that, with many resampling methods (including the 25 bootstrap resamples that trainControl uses by default), a single training set instance will be held out multiple times:

> table(table(mod$pred$rowIndex))

  2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17 
  2  11  30  77 135 209 332 314 307 231 185  93  48  16   6   4 

You could average the class probabilities per rowIndex if you like.
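
For example, a minimal sketch of that averaging with base R's aggregate, then re-running calibration on the per-instance averages (using the mod fit from above):

> avg <- aggregate(cbind(Class1, Class2) ~ rowIndex + obs,
+                  data = mod$pred, FUN = mean)
> cal_avg <- calibration(obs ~ Class1, data = avg)
> xyplot(cal_avg)

Since obs is constant within a rowIndex, grouping by both simply carries the observed class through to the averaged data frame.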

Max