First-time poster here, so apologies for any rookie errors.
I am using the caret package in R for classification. I am fitting several models (GBM, linear SVM, NB, LDA) using repeated 10-fold cross-validation over a training set. With a custom trainControl, caret even gives me a whole range of performance metrics (ROC, sensitivity/specificity, Kappa, accuracy) over the held-out folds. That really is fantastic. There is just one more metric I would love to have: some measure of model calibration.
I noticed that caret has a function that can create a calibration plot to estimate how consistent predicted probabilities are with observed outcomes across portions of your data. Is it possible to have caret compute something like this for each test fold during the cross-validated model building? Or can it only be applied to new held-out data that we are making predictions on?
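To make concrete what I mean by a calibration measure, here is a minimal base-R sketch (the function name `calib_error` and the binning scheme are my own, not from caret): it bins predicted probabilities and compares the mean prediction in each bin to the observed event rate.

```r
# Sketch of a binned calibration-error metric (my own naming/choices).
# prob: predicted P(event); obs: 1 if the event occurred, 0 otherwise.
calib_error <- function(prob, obs, bins = 10) {
  cut_points <- seq(0, 1, length.out = bins + 1)
  bin_id <- cut(prob, breaks = cut_points, include.lowest = TRUE)
  mean_pred <- tapply(prob, bin_id, mean)   # avg prediction per bin
  mean_obs  <- tapply(obs,  bin_id, mean)   # observed event rate per bin
  n_bin     <- tapply(obs,  bin_id, length) # bin sizes (for weighting)
  keep <- !is.na(mean_pred)                 # drop empty bins
  # weighted mean absolute gap between predicted and observed rates
  sum(n_bin[keep] * abs(mean_pred[keep] - mean_obs[keep])) / sum(n_bin[keep])
}
```

A well-calibrated model should give a value near 0; e.g. `calib_error(c(0.05, 0.05, 0.95, 0.95), c(0, 0, 1, 1))` returns 0.05.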
For some context, at the moment I have something like this:
fitControl <- trainControl(method = "repeatedcv",
                           number = 10,
                           repeats = 2,
                           classProbs = TRUE,
                           summaryFunction = custom.summary)

gbmGrid <- expand.grid(.interaction.depth = c(1, 2, 3),
                       .n.trees = seq(100, 800, by = 100),
                       .shrinkage = 0.01)

gbmModel <- train(x = data.frame(t_train_predictors),
                  y = train_target,
                  method = "gbm",
                  trControl = fitControl,
                  tuneGrid = gbmGrid,
                  verbose = FALSE)
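One idea I had: since `summaryFunction` is computed per resample anyway, perhaps a calibration-sensitive score could be added there. Below is a sketch assuming caret's standard summary-function signature (`data` holds an `obs` factor column plus one probability column per class level); `brierSummary` is my own name, and the Brier score is a stand-in, not caret's calibration plot itself:

```r
# Sketch: report a Brier score (a calibration-sensitive proper scoring
# rule) as a per-resample metric via caret's summaryFunction hook.
# Assumes the standard signature: data$obs is the observed class factor,
# and data[, lev[1]] holds the predicted probability of the first level.
brierSummary <- function(data, lev = NULL, model = NULL) {
  y <- as.numeric(data$obs == lev[1])  # 1 if first class, else 0
  p <- data[, lev[1]]                  # predicted prob of first class
  c(Brier = mean((p - y)^2))           # lower is better-calibrated
}
```

This could presumably be merged into `custom.summary` so the Brier score shows up alongside ROC and the rest, but I am not sure whether that is the intended way to get at per-fold calibration, hence the question.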
If it helps, I am using ~25 numeric predictors with N = 2,200, predicting a two-class factor.
Many thanks in advance for any help/advice. Adam