
I am getting different score values from different estimators from scikit.

  1. SVR(kernel='rbf', C=1e5, gamma=0.1) 0.97368549023058548
  2. Linear regression 0.80539997869990632
  3. DecisionTreeRegressor(max_depth = 5) 0.83165426563946387

Since all regression estimators should use the R-squared score, I think they are comparable, i.e. the closer the score is to 1, the better the model is trained. However, each model may implement the score function differently, so I am not sure. Please clarify.
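To check this directly, the three estimators from the question can be scored on the same data; a minimal sketch, assuming scikit-learn is installed and using a synthetic sine dataset (the data and hyperparameters here are illustrative, not the original ones):

```python
# Sketch: the default .score() of every scikit-learn regressor is R^2,
# so it should agree with sklearn.metrics.r2_score on the same data.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(200)  # noisy sine, illustrative only

models = {
    "SVR": SVR(kernel="rbf", C=1e5, gamma=0.1),
    "LinearRegression": LinearRegression(),
    "DecisionTreeRegressor": DecisionTreeRegressor(max_depth=5),
}

for name, model in models.items():
    model.fit(X, y)
    # .score(X, y) and r2_score(y, predictions) compute the same metric
    assert np.isclose(model.score(X, y), r2_score(y, model.predict(X)))
    print(name, model.score(X, y))
```

Because every regressor's `.score()` delegates to the same R-squared computation, the printed values are on a common scale and can be compared directly.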

What does this have to do with Python? – stelioslogothetis
I forgot to mention: these values are obtained using scikit-learn, and Python is used to slice and dice the data. – dknight
In scikit-learn, all regression estimators use the R-squared metric by default for the score() function, so it's perfectly fine to compare the scores of the three you have mentioned, as long as the scorer is not manually changed and they are evaluated on the same data. – Vivek Kumar

1 Answer


If the same data is fed to the models through a similar pipeline, then the scores are comparable. You can choose the SVR model without any doubt.

By the way, it could be really instructive for you to re-implement this R-squared metric yourself; it is a nice way to learn the underlying mechanics.
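A minimal sketch of such a re-implementation, checked against `sklearn.metrics.r2_score` (the sample values here are illustrative):

```python
# R^2 = 1 - SS_res / SS_tot, where SS_res is the residual sum of squares
# and SS_tot is the total sum of squares around the mean of y_true.
import numpy as np
from sklearn.metrics import r2_score

def r_squared(y_true, y_pred):
    """Coefficient of determination, re-implemented from its definition."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
print(r_squared(y_true, y_pred))  # matches r2_score(y_true, y_pred)
assert np.isclose(r_squared(y_true, y_pred), r2_score(y_true, y_pred))
```

A score of 1 means the predictions match the targets exactly (SS_res is 0); a constant model that always predicts the mean of the targets scores 0, and a model can score arbitrarily far below 0 if it is worse than that baseline.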