Perhaps this is too long-winded. Simple question about sklearn's random forest:
For a true/false classification problem, is there a way in sklearn's random forest to specify the sample size used to train each tree, along with the ratio of true to false observations?
More details are below:
In the R implementation of random forest, the randomForest package, there's a sampsize argument to randomForest(). This allows you to balance the sample used to train each tree based on the outcome.
For example, if you're trying to predict whether an outcome is true or false and 90% of the outcomes in the training set are false, you can set sampsize = c(500, 500). Each tree is then trained on a random sample, drawn with replacement from the training set, containing 500 true and 500 false observations. In these situations, I've found models perform much better at predicting true outcomes when using a 50% cut-off, yielding much higher kappas.
It doesn't seem like there is an option for this in the sklearn implementation.
- Is there any way to mimic this functionality in sklearn?
- Would simply optimizing the cut-off based on the Kappa statistic achieve a similar result or is something lost in this approach?
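To clarify what I'm after, here is a hand-rolled sketch of the behaviour I want, built from plain DecisionTreeClassifiers rather than RandomForestClassifier. All the names, the toy data, and the sample size of 50 per class are just placeholders, and the trees are combined by simple vote averaging rather than anything fancier:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy imbalanced data: roughly 10% true outcomes (placeholder data).
X = rng.normal(size=(1000, 5))
y = rng.random(1000) < 0.1
X[y] += 1.0  # shift the positives so there is some signal to learn

def balanced_bootstrap_forest(X, y, n_trees=100, n_per_class=50, seed=0):
    """Train each tree on a bootstrap sample drawn with replacement,
    n_per_class observations from each class -- a stand-in for
    randomForest's sampsize = c(n, n)."""
    rng = np.random.default_rng(seed)
    pos_idx = np.flatnonzero(y)
    neg_idx = np.flatnonzero(~y)
    trees = []
    for _ in range(n_trees):
        idx = np.concatenate([
            rng.choice(pos_idx, size=n_per_class, replace=True),
            rng.choice(neg_idx, size=n_per_class, replace=True),
        ])
        tree = DecisionTreeClassifier(
            max_features="sqrt",
            random_state=int(rng.integers(1 << 31)),
        )
        trees.append(tree.fit(X[idx], y[idx]))
    return trees

def predict_proba_true(trees, X):
    # Fraction of trees voting True for each observation.
    return np.mean([t.predict(X) for t in trees], axis=0)

trees = balanced_bootstrap_forest(X, y)
proba = predict_proba_true(trees, X)
preds = proba >= 0.5  # the 50% cut-off mentioned above
```

This loses RandomForestClassifier's conveniences (parallelism, out-of-bag estimates, feature importances), which is why I'd prefer a built-in option if one exists.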