I am trying to use a random forest for my problem (below is sample code for the Boston dataset, not for my actual data). I am planning to use GridSearchCV for hyperparameter tuning, but what should the range of values be for the different parameters? How will I know that the range I am selecting is the correct one?
I was reading about it on the internet, and someone suggested "zooming in" on the optimum in a second grid search (e.g. if the best value was 10, then try [5, 20, 50] next).
Is this the right approach? Should I use this approach for ALL the parameters required for a random forest? This approach may miss a "good" combination, right?
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor

dataset = load_boston()
X, y = dataset.data, dataset.target
model = RandomForestRegressor(random_state=30)
# Note: "gini"/"entropy" are classifier criteria; the regressor
# uses "squared_error"/"absolute_error" instead.
param_grid = {"n_estimators": [250, 300],
              "criterion": ["squared_error", "absolute_error"],
              "max_features": [3, 5],
              "max_depth": [10, 20],
              "min_samples_split": [2, 4],
              "bootstrap": [True, False]}
grid_search = GridSearchCV(model, param_grid, n_jobs=-1, cv=2)
grid_search.fit(X, y)
print(grid_search.best_params_)
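To make the "zoom in" idea concrete, here is a minimal sketch of a two-stage search: a coarse grid first, then a finer grid centered on the stage-1 optimum. The data (`make_regression`), the parameter choices, and the step sizes are all illustrative assumptions, not recommendations for my real problem.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic data standing in for the real dataset (assumption).
X, y = make_regression(n_samples=200, n_features=10, random_state=0)

# Stage 1: coarse grid over wide, illustrative ranges.
coarse_grid = {"n_estimators": [50, 100, 200],
               "max_depth": [5, 10, 20]}
coarse = GridSearchCV(RandomForestRegressor(random_state=30),
                      coarse_grid, cv=3, n_jobs=-1)
coarse.fit(X, y)
best = coarse.best_params_

# Stage 2: finer grid centered on the stage-1 optimum ("zoom in").
# The +/- step sizes here are arbitrary examples.
fine_grid = {"n_estimators": [max(10, best["n_estimators"] - 25),
                              best["n_estimators"],
                              best["n_estimators"] + 25],
             "max_depth": [max(1, best["max_depth"] - 2),
                           best["max_depth"],
                           best["max_depth"] + 2]}
fine = GridSearchCV(RandomForestRegressor(random_state=30),
                    fine_grid, cv=3, n_jobs=-1)
fine.fit(X, y)
print(fine.best_params_)
```

This still only refines around one point, so it shares the weakness I mentioned: a genuinely good combination outside the coarse grid's neighbourhood will never be tried.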