Is it possible to generate a decision forest whose trees are all exactly the same? Please note that this is an experimental question. As far as I understand, random forests have two parameters that introduce 'randomness' compared to a single decision tree:
1) the number of features randomly sampled at each node of a decision tree, and
2) the number of training examples drawn to build each tree.
Intuitively, if I set these two parameters to their maximum values, then I should be avoiding the 'randomness', and hence each tree built should be exactly the same. Since all the trees would be identical, I should get the same results regardless of the number of trees in the forest, and across different runs (i.e. different seed values).
I have tested this idea using the randomForest library in R. I think the two aforementioned parameters correspond to 'mtry' and 'sampsize' respectively. I have set both to their maximum values, but unfortunately some randomness is still left: the OOB-error estimates vary depending on the number of trees in the forest.
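To make the setup concrete, here is a minimal sketch of the kind of call I am making (using iris as a stand-in for my actual data; the variable names are my own):

```r
library(randomForest)
data(iris)

n_features <- ncol(iris) - 1   # number of predictor columns
n_samples  <- nrow(iris)       # number of training rows

set.seed(1)
rf <- randomForest(Species ~ ., data = iris,
                   mtry = n_features,     # consider every feature at each split
                   sampsize = n_samples,  # draw as many rows as the data set has
                   ntree = 10)
print(rf)  # the reported OOB error still changes with ntree and with the seed
```
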
Would you please help me understand how to remove all the randomness from a random decision forest, preferably using the arguments of the randomForest library in R?