I would like to build a decision tree classifier in Python, but I want to force the tree, regardless of what it thinks is best, to split only one node at a time into two children: each split produces one terminal leaf and one interior node that continues to split, rather than two interior nodes that can both split further. One branch of every split terminates, and this continues until the remaining node can no longer be split without producing leaves below the minimum size.
For instance, the following tree satisfies this requirement:
The reason I want to do this is to obtain a nested set of splits of the observations. I saw in another post (Finding a corresponding leaf node for each data point in a decision tree (scikit-learn)) that the leaf node ID of each observation can be recovered, which is crucial. I realize I could build a tree without this restriction and walk from one of the leaf nodes up to the root, but that path may not cover enough observations, and I would like the nested structure to span all the observations in the dataset.
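For reference, the leaf-ID lookup described in that post is available directly via `DecisionTreeClassifier.apply`. A minimal sketch on a toy dataset (the dataset and parameters here are just illustrative):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Fit an ordinary (unrestricted) tree on a toy dataset
# just to demonstrate the leaf lookup.
X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(min_samples_leaf=10, random_state=0).fit(X, y)

# apply() returns, for each row of X, the node ID of the leaf it lands in.
leaf_ids = clf.apply(X)
```

Observations sharing a `leaf_ids` value belong to the same leaf, which is what makes recovering groups of observations from a fitted tree possible.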
In my application I don't actually care about the classification task; I only want the nested set of observation groups formed by splits on the features. I had planned to generate the target variable randomly so that the splits on features would not be meaningful (which is counter-intuitive, but I'm using the tree for a different purpose). Alternatively, if someone knows of a similar binary-split method in Python that achieves the same result, please let me know.
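One possible workaround, as a minimal sketch rather than a definitive implementation: repeatedly fit a depth-1 stump (`max_depth=1`) on the remaining observations against a randomly generated target, peel off one child as a terminal group, and recurse into the other child. This reproduces the one-leaf-per-split structure without modifying scikit-learn internals. The `nested_splits` helper name, the rule of terminating the smaller child, and the `min_leaf` parameter are all my own assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def nested_splits(X, min_leaf=5, seed=0):
    """Return a list of disjoint index arrays: each stump split peels off
    one terminal group, and the other child is split again."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(X))
    groups = []
    while len(idx) >= 2 * min_leaf:
        # Random binary target, so the split carries no predictive meaning.
        y = rng.integers(0, 2, size=len(idx))
        stump = DecisionTreeClassifier(
            max_depth=1, min_samples_leaf=min_leaf, random_state=0
        ).fit(X[idx], y)
        leaf_ids = stump.apply(X[idx])
        if len(np.unique(leaf_ids)) < 2:
            break  # stump could not split (e.g. constant target)
        left = idx[leaf_ids == leaf_ids.min()]
        right = idx[leaf_ids == leaf_ids.max()]
        # Assumed rule: the smaller side terminates, the larger keeps splitting.
        terminal, idx = (left, right) if len(left) <= len(right) else (right, left)
        groups.append(terminal)
    groups.append(idx)  # the final un-splittable remainder
    return groups

groups = nested_splits(np.random.default_rng(0).normal(size=(60, 3)))
```

The nested structure is implicit in the ordering: group `k` together with all groups after it forms the set of observations present at depth `k` of the tree.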