I have some trouble grasping the standard way to use cross-validation for hyperparameter tuning and evaluation. I want to do 10-fold CV. Which of the following is the correct approach?
All of the data is used for hyperparameter tuning (e.g. a randomized grid search with cross-validation), which returns the best hyperparameters. Then a new model is constructed with these hyperparameters, and it is evaluated with another cross-validation (nine folds for training, one for testing; at the end, metrics such as accuracy or the confusion matrix are averaged over the folds).
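In code, I picture this first option roughly as follows (a minimal sketch using scikit-learn; the RandomForestClassifier, the parameter values, and the toy dataset are just placeholders I chose for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

# Placeholder data and estimator -- my real dataset/model are different.
X, y = make_classification(n_samples=500, random_state=0)

# Step 1: tune hyperparameters on ALL of the data with 10-fold CV.
param_dist = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_dist,
    n_iter=5,
    cv=10,
    random_state=0,
)
search.fit(X, y)

# Step 2: build a fresh model with the best hyperparameters and
# evaluate it with another 10-fold CV on the same data.
best_model = RandomForestClassifier(random_state=0, **search.best_params_)
scores = cross_val_score(best_model, X, y, cv=10, scoring="accuracy")
print(scores.mean())
```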
Another way that I found is to first split the data into a training and a test set, and then perform cross-validation only on the training set. One would then evaluate on the test set. However, as I understand it, that would undermine the whole point of cross-validation, since the idea behind it is to be independent of any particular split, right?
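This is how I picture the second option (again a sketch with placeholder data, model, and parameter values; refit=True is scikit-learn's default, so the search already retrains the best model on the whole training set):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = make_classification(n_samples=500, random_state=0)

# Step 1: hold out a test set before doing any tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Step 2: 10-fold CV for hyperparameter search on the training set only.
param_dist = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_dist,
    n_iter=5,
    cv=10,
    random_state=0,
)
search.fit(X_train, y_train)

# Step 3: evaluate the tuned model once on the held-out test set.
y_pred = search.best_estimator_.predict(X_test)
print(accuracy_score(y_test, y_pred))
```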
Lastly, my supervisor told me that I should use eight folds for training, one for hyperparameter estimation, and one for testing (and therefore evaluation). However, I could not find any material where this approach is used. Is that a standard procedure, or have I just misunderstood something?
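For reference, this is how I have interpreted that suggestion. The fold rotation (validation fold = the fold after the test fold) and the choice not to refit on training plus validation data are my own guesses, not something my supervisor specified:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=500, random_state=0)

# Candidate hyperparameter settings (placeholder values).
candidates = [{"max_depth": d} for d in (3, 5, 10, None)]

kf = KFold(n_splits=10, shuffle=True, random_state=0)
folds = [test_idx for _, test_idx in kf.split(X)]

test_scores = []
for i in range(10):
    test_idx = folds[i]                # 1 fold for testing
    val_idx = folds[(i + 1) % 10]      # 1 fold for hyperparameter estimation
    train_idx = np.concatenate(
        [folds[j] for j in range(10) if j not in (i, (i + 1) % 10)]
    )                                  # remaining 8 folds for training

    # Pick the candidate that does best on the validation fold.
    best_params, best_val = None, -np.inf
    for params in candidates:
        model = RandomForestClassifier(random_state=0, **params)
        model.fit(X[train_idx], y[train_idx])
        val = accuracy_score(y[val_idx], model.predict(X[val_idx]))
        if val > best_val:
            best_val, best_params = val, params

    # Retrain with the chosen hyperparameters and score on the test fold.
    final = RandomForestClassifier(random_state=0, **best_params)
    final.fit(X[train_idx], y[train_idx])
    test_scores.append(accuracy_score(y[test_idx], final.predict(X[test_idx])))

print(np.mean(test_scores))
```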
