I have a follow-up question to this one. As in the initial question, I am using the mlr3verse, have a new dataset, and would like to make predictions using the parameters that performed well during autotuning. The answer to that question says to use at$train(task). This seems to initiate the tuning again, however. Does it take advantage of the nested resampling at all, e.g. by reusing the parameters found there?
Also, at$tuning_result contains two sets of parameters, one called tune_x and one called params. What is the difference between the two?
Thanks.
Edit: example workflow added below.
library(mlr3verse)

set.seed(56624)
task = tsk("mtcars")
learner = lrn("regr.xgboost")

# search space for the hyperparameters to tune
tune_ps = ParamSet$new(list(
  ParamDbl$new("eta", lower = 0.1, upper = 0.4),
  ParamInt$new("max_depth", lower = 2, upper = 4)
))
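# (aside: newer paradox versions offer a shorthand for the same search space,
# something like ps(eta = p_dbl(0.1, 0.4), max_depth = p_int(2, 4)),
# if I understand the docs correctly)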
at = AutoTuner$new(
  learner = learner,
  resampling = rsmp("holdout"), # inner resampling
  measures = msr("regr.mse"),
  tune_ps = tune_ps,
  terminator = term("evals", n_evals = 3),
  tuner = tnr("random_search")
)
# outer resampling: this performs the nested resampling
rr = resample(task = task, learner = at, resampling = rsmp("cv", folds = 2),
              store_models = TRUE)
rr$aggregate()
rr$score()

# tuning result selected within each outer fold
lapply(rr$learners, function(x) x$tuning_result)
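# to see the two parameter sets mentioned above side by side for a single
# outer fold (tune_x vs. params, as stored by this mlr3tuning version):
rr$learners[[1]]$tuning_result$tune_x
rr$learners[[1]]$tuning_result$params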
# train the AutoTuner on the full task (this appears to re-run the tuning)
at$train(task)
at$tuning_result

notreallynew.df = as.data.table(task)
at$predict_newdata(newdata = notreallynew.df)
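In case it clarifies what I am after: my fallback would be to pull the chosen values out of the tuning result and set them on a fresh learner by hand, roughly as below (a sketch assuming at$tuning_result$params is a named list of the learner parameter values belonging to the best configuration):

# sketch of the manual alternative: reuse the tuned values without tuning again
tuned = lrn("regr.xgboost")
tuned$param_set$values = at$tuning_result$params # assumed to be a named list
tuned$train(task)
tuned$predict_newdata(newdata = notreallynew.df)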