
mlr3 is really cool. I am trying to tune the regularisation parameter with

# search lambda on the log scale ...
searchspace_glmnet_trafo = ParamSet$new(list(
  ParamDbl$new("regr.glmnet.lambda", log(0.01), log(10))
))
# ... and transform back to the original scale before each evaluation
searchspace_glmnet_trafo$trafo = function(x, param_set) {
  x$regr.glmnet.lambda = exp(x$regr.glmnet.lambda)
  x
}

but I get the error

Error in glmnet::cv.glmnet(x = data, y = target, family = "gaussian", : Need more than one value of lambda for cv.glmnet

A minimal non-working example is below. Any help is greatly appreciated.

library(mlr3verse)
data("kc_housing", package = "mlr3data")

library(anytime)
dates = anytime(kc_housing$date)
kc_housing$date = as.numeric(difftime(dates, min(dates), units = "days"))
kc_housing$zipcode = as.factor(kc_housing$zipcode)
kc_housing$renovated = as.numeric(!is.na(kc_housing$yr_renovated))
kc_housing$has_basement = as.numeric(!is.na(kc_housing$sqft_basement))

kc_housing$id = NULL
kc_housing$price = kc_housing$price / 1000
kc_housing$yr_renovated = NULL
kc_housing$sqft_basement = NULL
lrnglm = lrn("regr.glmnet")

tsk = TaskRegr$new("sales", kc_housing, target = "price")
fencoder = po("encode", method = "treatment",
              affect_columns = selector_type("factor"))
pipe = fencoder %>>% lrnglm

glearner = GraphLearner$new(pipe)
glearner$train(tsk)


searchspace_glmnet_trafo = ParamSet$new(list(
  ParamDbl$new("regr.glmnet.lambda", log(0.01), log(10))
))
searchspace_glmnet_trafo$trafo = function(x, param_set) {
  x$regr.glmnet.lambda = (exp(x$regr.glmnet.lambda))
  x
}
inst = TuningInstance$new(
  tsk, glearner,
  rsmp("cv"), msr("regr.mse"),
  searchspace_glmnet_trafo, term("evals", n_evals = 100)
)
gsearch = tnr("grid_search", resolution = 100)
gsearch$tune(inst)

1 Answer


lambda needs to be a vector parameter, not a single value (as the error message tells you).
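
For example, a whole lambda sequence can be handed to the learner directly instead of being tuned from the outside. This is only a minimal sketch and assumes the learner passes a numeric vector supplied for lambda straight through to cv.glmnet:

lrnglm = lrn("regr.glmnet")
# give cv.glmnet a full lambda path (100 values on a log-spaced grid)
# instead of a single value
lrnglm$param_set$values$lambda = exp(seq(log(0.01), log(10), length.out = 100))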

I suggest not tuning cv.glmnet at all. This algorithm performs an internal 10-fold cross-validation to optimise lambda and relies on its own lambda sequence. Consult the help page of the learner for more information.
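
In other words, a single call to train() already performs the lambda optimisation. As a sketch (the exact accessor for the underlying model may differ between mlr3pipelines versions), the internally selected lambda can then be inspected on the fitted cv.glmnet object:

glearner$train(tsk)
# the state of the "regr.glmnet" pipeop holds the fitted cv.glmnet object
cvfit = glearner$model$regr.glmnet$model
cvfit$lambda.min   # lambda with the smallest cross-validated error
cvfit$lambda.1se   # largest lambda within one standard error of the minimum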

You can apply your own tuning (of the parameter s, not lambda) to glmnet::glmnet(). However, this algorithm is not (yet) available for use with {mlr3}; a rough sketch of what that could look like is given below.
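
If such a plain glmnet learner did exist, the tuning setup from the question would carry over almost unchanged. The sketch below is purely hypothetical: it assumes a learner id of "regr.glmnet" that wraps glmnet::glmnet and exposes the prediction penalty s as a tunable parameter.

# hypothetical: tune s (on the log scale) instead of lambda
searchspace_s = ParamSet$new(list(
  ParamDbl$new("regr.glmnet.s", log(0.01), log(10))
))
searchspace_s$trafo = function(x, param_set) {
  x$regr.glmnet.s = exp(x$regr.glmnet.s)
  x
}
inst = TuningInstance$new(
  tsk, glearner,
  rsmp("cv", folds = 3), msr("regr.mse"),
  searchspace_s, term("evals", n_evals = 20)
)
tnr("grid_search", resolution = 20)$tune(inst)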