In the Random Forest method, for each tree we randomly select a fixed-size subset of variables (features). But once this subset is frozen for that particular tree, does the tree behave like a regular decision tree algorithm?
I am assuming that a random forest is nothing but a bunch of classical decision trees whose votes are combined for the final classification. But many descriptions I have read seem to suggest that, for a given decision tree within the forest, variables are randomly selected at each node. Is that the case?
Does it mean that at each node in the tree we randomly select m variables, and if so, from the variable set that is fixed for that tree or from the global variable set of the training dataset? And is it then correct to say that, from this selected set, we pick one variable heuristically (e.g. whichever variable maximizes information gain)?
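To make my question concrete, here is a minimal sketch of the per-node procedure I am describing. The `choose_split` helper is hypothetical (something I wrote for illustration, not from any library): at each node it samples m candidate variables from the full feature set, then picks the best candidate by information gain.

```python
import math
import random

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(X, y, feature, threshold):
    """Gain from splitting rows on X[i][feature] <= threshold."""
    left = [yi for xi, yi in zip(X, y) if xi[feature] <= threshold]
    right = [yi for xi, yi in zip(X, y) if xi[feature] > threshold]
    if not left or not right:
        return 0.0
    n = len(y)
    return (entropy(y)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

def choose_split(X, y, m, rng):
    """Per-node step as I understand it: sample m candidate features
    from ALL features, then pick the (feature, threshold) pair that
    maximizes information gain among those candidates only."""
    n_features = len(X[0])
    candidates = rng.sample(range(n_features), m)
    best_feature, best_threshold, best_gain = None, None, -1.0
    for f in candidates:
        for t in sorted({row[f] for row in X}):
            gain = information_gain(X, y, f, t)
            if gain > best_gain:
                best_feature, best_threshold, best_gain = f, t, gain
    return best_feature, best_threshold, best_gain

# Toy data: feature 0 perfectly separates the two classes.
X = [[0, 5], [1, 3], [2, 8], [3, 1]]
y = [0, 0, 1, 1]
rng = random.Random(0)
f, t, gain = choose_split(X, y, m=2, rng=rng)
print(f, t, gain)  # feature 0 at threshold 1 gives gain 1.0
```

Note that `choose_split` draws its m candidates from the global feature set; my question is whether that is right, or whether the pool should instead be a subset fixed once per tree.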