Can I extract the underlying decision-rules (or 'decision paths') from a trained tree in a decision tree as a textual list?
Something like:
if A>0.4 then if B<0.2 then if C>0.8 then class='X'
I believe that this answer is more correct than the other answers here:
from sklearn.tree import _tree

def tree_to_code(tree, feature_names):
    tree_ = tree.tree_
    feature_name = [
        feature_names[i] if i != _tree.TREE_UNDEFINED else "undefined!"
        for i in tree_.feature
    ]
    print "def tree({}):".format(", ".join(feature_names))

    def recurse(node, depth):
        indent = "  " * depth
        if tree_.feature[node] != _tree.TREE_UNDEFINED:
            name = feature_name[node]
            threshold = tree_.threshold[node]
            print "{}if {} <= {}:".format(indent, name, threshold)
            recurse(tree_.children_left[node], depth + 1)
            print "{}else: # if {} > {}".format(indent, name, threshold)
            recurse(tree_.children_right[node], depth + 1)
        else:
            print "{}return {}".format(indent, tree_.value[node])

    recurse(0, 1)
This prints out a valid Python function. Here's an example output for a tree that is trying to return its input, a number between 0 and 10.
def tree(f0):
  if f0 <= 6.0:
    if f0 <= 1.5:
      return [[ 0.]]
    else: # if f0 > 1.5
      if f0 <= 4.5:
        if f0 <= 3.5:
          return [[ 3.]]
        else: # if f0 > 3.5
          return [[ 4.]]
      else: # if f0 > 4.5
        return [[ 5.]]
  else: # if f0 > 6.0
    if f0 <= 8.5:
      if f0 <= 7.5:
        return [[ 7.]]
      else: # if f0 > 7.5
        return [[ 8.]]
    else: # if f0 > 8.5
      return [[ 9.]]
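For reference, a tree similar to the one above could be produced by something along these lines (my reconstruction; the exact splits and leaf values depend on the data and the tree settings):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.arange(10).reshape(-1, 1)   # one feature f0 with values 0..9
y = X.ravel()                      # the target is the input itself
reg = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, y)
tree_to_code(reg, ['f0'])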
Here are some stumbling blocks that I see in other answers:
Using tree_.threshold == -2 to decide whether a node is a leaf isn't a good idea. What if it's a real decision node with a threshold of -2? Instead, you should look at tree.feature or tree.children_*.
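For example, a minimal leaf check (my sketch, using the constants that scikit-learn exposes in sklearn.tree._tree) could look like this:

from sklearn.tree import _tree

def is_leaf(tree_, node):
    # a leaf has no children, so this avoids testing the threshold against -2
    return tree_.children_left[node] == _tree.TREE_LEAF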
features = [feature_names[i] for i in tree_.feature] crashes with my version of sklearn, because some values of tree.tree_.feature are -2 (specifically for leaf nodes).
I created my own function to extract the rules from the decision trees created by sklearn:
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
# dummy data:
df = pd.DataFrame({'col1':[0,1,2,3],'col2':[3,4,5,6],'dv':[0,1,0,1]})
# create decision tree
dt = DecisionTreeClassifier(max_depth=5, min_samples_leaf=1)
dt.fit(df.iloc[:, :2], df.dv)
This function starts with the leaf nodes (identified by -1 in the child arrays) and then recursively finds their parents. I call this a node's 'lineage'. Along the way, I grab the values I need to create if/then/else SAS logic:
def get_lineage(tree, feature_names):
    left      = tree.tree_.children_left
    right     = tree.tree_.children_right
    threshold = tree.tree_.threshold
    features  = [feature_names[i] for i in tree.tree_.feature]

    # get ids of the leaf nodes
    idx = np.argwhere(left == -1)[:,0]

    def recurse(left, right, child, lineage=None):
        if lineage is None:
            lineage = [child]
        if child in left:
            parent = np.where(left == child)[0].item()
            split = 'l'
        else:
            parent = np.where(right == child)[0].item()
            split = 'r'

        lineage.append((parent, split, threshold[parent], features[parent]))

        if parent == 0:
            lineage.reverse()
            return lineage
        else:
            return recurse(left, right, parent, lineage)

    for child in idx:
        for node in recurse(left, right, child):
            print node
The sets of tuples below contain everything I need to create SAS if/then/else statements. I do not like using do blocks in SAS, which is why I create logic describing a node's entire path. The single integer after the tuples is the ID of the terminal node in a path. All of the preceding tuples combine to create that node.
In [1]: get_lineage(dt, df.columns)
(0, 'l', 0.5, 'col1')
1
(0, 'r', 0.5, 'col1')
(2, 'l', 4.5, 'col2')
3
(0, 'r', 0.5, 'col1')
(2, 'r', 4.5, 'col2')
(4, 'l', 2.5, 'col1')
5
(0, 'r', 0.5, 'col1')
(2, 'r', 4.5, 'col2')
(4, 'r', 2.5, 'col1')
6
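If you would rather have each path as a single condition string (closer to the if/then clauses the question asks for), a small helper along these lines can post-process what recurse returns; the name lineage_to_rule and the rendering are my own additions, not part of the original answer:

def lineage_to_rule(lineage):
    # every item except the last is a (parent, split, threshold, feature) tuple;
    # the final item is the terminal node id
    steps, leaf = lineage[:-1], lineage[-1]
    conds = ["{} {} {}".format(f, "<=" if s == 'l' else ">", t)
             for _, s, t, f in steps]
    return "if " + " and ".join(conds) + " then node {}".format(leaf)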
I modified the code submitted by Zelazny7 to print some pseudocode:
def get_code(tree, feature_names):
    left      = tree.tree_.children_left
    right     = tree.tree_.children_right
    threshold = tree.tree_.threshold
    features  = [feature_names[i] for i in tree.tree_.feature]
    value = tree.tree_.value

    def recurse(left, right, threshold, features, node):
        if (threshold[node] != -2):
            print "if ( " + features[node] + " <= " + str(threshold[node]) + " ) {"
            if left[node] != -1:
                recurse(left, right, threshold, features, left[node])
            print "} else {"
            if right[node] != -1:
                recurse(left, right, threshold, features, right[node])
            print "}"
        else:
            print "return " + str(value[node])

    recurse(left, right, threshold, features, 0)
If you call get_code(dt, df.columns) on the same example, you will obtain:
if ( col1 <= 0.5 ) {
return [[ 1. 0.]]
} else {
if ( col2 <= 4.5 ) {
return [[ 0. 1.]]
} else {
if ( col1 <= 2.5 ) {
return [[ 1. 0.]]
} else {
return [[ 0. 1.]]
}
}
}
Scikit-learn introduced a delicious new method called export_text in version 0.21 (May 2019) to extract the rules from a tree. Documentation here. It's no longer necessary to create a custom function.
Once you've fit your model, you just need two lines of code. First, import export_text:
from sklearn.tree import export_text
Second, create an object that will contain your rules. To make the rules look more readable, use the feature_names argument and pass a list of your feature names. For example, if your model is called model and your features are named in a dataframe called X_train, you could create an object called tree_rules:
tree_rules = export_text(model, feature_names=list(X_train.columns))
Then just print or save tree_rules. Your output will look like this:
|--- Age <= 0.63
| |--- EstimatedSalary <= 0.61
| | |--- Age <= -0.16
| | | |--- class: 0
| | |--- Age > -0.16
| | | |--- EstimatedSalary <= -0.06
| | | | |--- class: 0
| | | |--- EstimatedSalary > -0.06
| | | | |--- EstimatedSalary <= 0.40
| | | | | |--- EstimatedSalary <= 0.03
| | | | | | |--- class: 1
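To save the rules to a file instead of printing them, a minimal sketch (the file name here is just an example):

with open('tree_rules.txt', 'w') as f:
    f.write(tree_rules)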
There is a new DecisionTreeClassifier method, decision_path, in the 0.18.0 release. The developers provide an extensive (well-documented) walkthrough.
The first section of code in the walkthrough that prints the tree structure seems to be OK. However, I modified the code in the second section to interrogate one sample. My changes are denoted with # <--.
Edit: The changes marked by # <-- in the code below have since been incorporated into the walkthrough after the errors were pointed out in pull requests #8653 and #10951. It's much easier to follow along now.
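The snippet below assumes the setup from the walkthrough. Roughly, the following variables need to exist beforehand (this is my reconstruction, assuming clf is a fitted DecisionTreeClassifier and X_test is a NumPy array, not the walkthrough verbatim):

node_indicator = clf.decision_path(X_test)   # sparse matrix of the nodes each sample traverses
leave_id = clf.apply(X_test)                 # id of the leaf reached by each sample
feature = clf.tree_.feature                  # splitting feature per node
threshold = clf.tree_.threshold              # splitting threshold per node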
sample_id = 0
node_index = node_indicator.indices[node_indicator.indptr[sample_id]:
                                    node_indicator.indptr[sample_id + 1]]

print('Rules used to predict sample %s: ' % sample_id)
for node_id in node_index:

    if leave_id[sample_id] == node_id:  # <-- changed != to ==
        #continue # <-- comment out
        print("leaf node {} reached, no decision here".format(leave_id[sample_id])) # <--

    else: # <-- added else to iterate through decision nodes
        if (X_test[sample_id, feature[node_id]] <= threshold[node_id]):
            threshold_sign = "<="
        else:
            threshold_sign = ">"

        print("decision id node %s : (X[%s, %s] (= %s) %s %s)"
              % (node_id,
                 sample_id,
                 feature[node_id],
                 X_test[sample_id, feature[node_id]], # <-- changed i to sample_id
                 threshold_sign,
                 threshold[node_id]))
Rules used to predict sample 0:
decision id node 0 : (X[0, 3] (= 2.4) > 0.800000011921)
decision id node 2 : (X[0, 2] (= 5.1) > 4.94999980927)
leaf node 4 reached, no decision here
Change the sample_id to see the decision paths for other samples. I haven't asked the developers about these changes; they just seemed more intuitive when working through the example.
from StringIO import StringIO

out = StringIO()
tree.export_graphviz(clf, out_file=out)
print out.getvalue()
You can see a digraph Tree. Then, clf.tree_.feature and clf.tree_.value are the array of node splitting features and the array of node values, respectively. You can refer to this github source for more details.
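For a quick look at those arrays on a fitted classifier, a minimal sketch (assuming clf is a fitted DecisionTreeClassifier; the printing format is my own choice):

t = clf.tree_
for node in range(t.node_count):
    if t.children_left[node] == -1:  # leaf node
        print("node %d: leaf, value=%s" % (node, t.value[node]))
    else:
        print("node %d: split on feature %d at %.3f" % (node, t.feature[node], t.threshold[node]))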
Just because everyone was so helpful, I'll add a modification to Zelazny7's and Daniele's beautiful solutions. This one is for Python 2.7, with tabs to make it more readable:
def get_code(tree, feature_names, tabdepth=0):
    left      = tree.tree_.children_left
    right     = tree.tree_.children_right
    threshold = tree.tree_.threshold
    features  = [feature_names[i] for i in tree.tree_.feature]
    value = tree.tree_.value

    def recurse(left, right, threshold, features, node, tabdepth=0):
        if (threshold[node] != -2):
            print '\t' * tabdepth,
            print "if ( " + features[node] + " <= " + str(threshold[node]) + " ) {"
            if left[node] != -1:
                recurse(left, right, threshold, features, left[node], tabdepth+1)
            print '\t' * tabdepth,
            print "} else {"
            if right[node] != -1:
                recurse(left, right, threshold, features, right[node], tabdepth+1)
            print '\t' * tabdepth,
            print "}"
        else:
            print '\t' * tabdepth,
            print "return " + str(value[node])

    recurse(left, right, threshold, features, 0)
I've been going through this, but I needed the rules to be written in this format:
if A>0.4 then if B<0.2 then if C>0.8 then class='X'
So I adapted the answer of @paulkernfeld (thanks), so that you can customize it to your needs:
from sklearn.tree import _tree

def tree_to_code(tree, feature_names, Y):
    tree_ = tree.tree_
    feature_name = [
        feature_names[i] if i != _tree.TREE_UNDEFINED else "undefined!"
        for i in tree_.feature
    ]
    pathto = dict()

    global k
    k = 0

    def recurse(node, depth, parent):
        global k
        indent = "  " * depth
        if tree_.feature[node] != _tree.TREE_UNDEFINED:
            name = feature_name[node]
            threshold = tree_.threshold[node]
            s = "{} <= {} ".format(name, threshold)
            if node == 0:
                pathto[node] = s
            else:
                pathto[node] = pathto[parent] + ' & ' + s

            recurse(tree_.children_left[node], depth + 1, node)
            s = "{} > {}".format(name, threshold)
            if node == 0:
                pathto[node] = s
            else:
                pathto[node] = pathto[parent] + ' & ' + s
            recurse(tree_.children_right[node], depth + 1, node)
        else:
            k = k + 1
            print(k, ')', pathto[parent], tree_.value[node])

    recurse(0, 1, 0)
This is the code you need
I have modified the top-liked code to indent correctly in a Jupyter notebook running Python 3:
import numpy as np
from sklearn.tree import _tree

def tree_to_code(tree, feature_names):
    tree_ = tree.tree_
    feature_name = [feature_names[i]
                    if i != _tree.TREE_UNDEFINED else "undefined!"
                    for i in tree_.feature]
    print("def tree({}):".format(", ".join(feature_names)))

    def recurse(node, depth):
        indent = "  " * depth
        if tree_.feature[node] != _tree.TREE_UNDEFINED:
            name = feature_name[node]
            threshold = tree_.threshold[node]
            print("{}if {} <= {}:".format(indent, name, threshold))
            recurse(tree_.children_left[node], depth + 1)
            print("{}else: # if {} > {}".format(indent, name, threshold))
            recurse(tree_.children_right[node], depth + 1)
        else:
            print("{}return {}".format(indent, np.argmax(tree_.value[node])))

    recurse(0, 1)
The code below is my approach under Anaconda Python 2.7, plus the package "pydot-ng", for making a PDF file with the decision rules. I hope it is helpful.
from sklearn import tree

clf = tree.DecisionTreeClassifier(max_leaf_nodes=n)
clf_ = clf.fit(X, data_y)

feature_names = X.columns
class_name = clf_.classes_.astype(int).astype(str)

def output_pdf(clf_, name):
    from sklearn import tree
    from sklearn.externals.six import StringIO
    import pydot_ng as pydot
    dot_data = StringIO()
    tree.export_graphviz(clf_, out_file=dot_data,
                         feature_names=feature_names,
                         class_names=class_name,
                         filled=True, rounded=True,
                         special_characters=True,
                         node_ids=1,)
    graph = pydot.graph_from_dot_data(dot_data.getvalue())
    graph.write_pdf("%s.pdf" % name)

output_pdf(clf_, name='filename%s' % n)
Here is a way to translate the whole tree into a single (not necessarily too human-readable) python expression using the SKompiler library:
from skompiler import skompile
skompile(dtree.predict).to('python/code')
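A slightly fuller sketch of how this might be used end to end (the dataset, max_depth and variable names here are my own choices, not from the original answer):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from skompiler import skompile

X, y = load_iris(return_X_y=True)
dtree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# compile the fitted tree's predict function into a Python source string
print(skompile(dtree.predict).to('python/code'))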
This builds on @paulkernfeld's answer. If you have a dataframe X with your features and a target dataframe y with your responses, and you want to get an idea of which y value ended up in which node (and also want to plot it accordingly), you can do the following:
def tree_to_code(tree, feature_names):
    from sklearn.tree import _tree
    codelines = []
    codelines.append('def get_cat(X_tmp):\n')
    codelines.append('   catout = []\n')
    codelines.append('   for codelines in range(0,X_tmp.shape[0]):\n')
    codelines.append('      Xin = X_tmp.iloc[codelines]\n')
    tree_ = tree.tree_
    feature_name = [
        feature_names[i] if i != _tree.TREE_UNDEFINED else "undefined!"
        for i in tree_.feature
    ]
    #print "def tree({}):".format(", ".join(feature_names))

    def recurse(node, depth):
        # generated lines start at the for-loop body and nest one level deeper per split
        indent = "      " * depth
        if tree_.feature[node] != _tree.TREE_UNDEFINED:
            name = feature_name[node]
            threshold = tree_.threshold[node]
            codelines.append('{}if Xin["{}"] <= {}:\n'.format(indent, name, threshold))
            recurse(tree_.children_left[node], depth + 1)
            codelines.append('{}else: # if Xin["{}"] > {}\n'.format(indent, name, threshold))
            recurse(tree_.children_right[node], depth + 1)
        else:
            codelines.append('{}mycat = {}\n'.format(indent, node))

    recurse(0, 1)
    codelines.append('      catout.append(mycat)\n')
    codelines.append('   return pd.DataFrame(catout,index=X_tmp.index,columns=["category"])\n')
    codelines.append('node_ids = get_cat(X)\n')
    return codelines

mycode = tree_to_code(clf, X.columns.values)

# now execute the function and obtain the dataframe with all nodes
exec(''.join(mycode))
node_ids = [int(x[0]) for x in node_ids.values]
node_ids2 = pd.DataFrame(node_ids)
print('make plot')
import matplotlib.cm as cm
colors = cm.rainbow(np.linspace(0, 1, 1+max( list(set(node_ids)))))
#plt.figure(figsize=cm2inch(24, 21))
for i in list(set(node_ids)):
    plt.plot(y[node_ids2.values == i], 'o', color=colors[i], label=str(i))
mytitle = ['y colored by node']
plt.title(mytitle ,fontsize=14)
plt.xlabel('my xlabel')
plt.ylabel(tagname)
plt.xticks(rotation=70)
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.00), shadow=True, ncol=9)
plt.tight_layout()
plt.show()
plt.close()
not the most elegant version but it does the job...
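As a side note, a shorter route to the same node assignment (my suggestion, not part of the original answer) is the estimator's built-in apply method, which returns the leaf index reached by each sample:

node_ids2 = pd.DataFrame(clf.apply(X), index=X.index, columns=['category'])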
Now you can use export_text.
from sklearn.tree import export_text
r = export_text(loan_tree, feature_names=(list(X_train.columns)))
print(r)
A complete example from the scikit-learn documentation:
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_text
iris = load_iris()
X = iris['data']
y = iris['target']
decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
decision_tree = decision_tree.fit(X, y)
r = export_text(decision_tree, feature_names=iris['feature_names'])
print(r)
Here is a function that prints the rules of a scikit-learn decision tree under Python 3, with offsets for the conditional blocks to make the structure more readable:
def print_decision_tree(tree, feature_names=None, offset_unit='    '):
    '''Prints textual representation of rules of a decision tree
    tree: scikit-learn representation of tree
    feature_names: list of feature names. They are set to f1,f2,f3,... if not specified
    offset_unit: a string of offset of the conditional block'''

    left      = tree.tree_.children_left
    right     = tree.tree_.children_right
    threshold = tree.tree_.threshold
    value = tree.tree_.value
    if feature_names is None:
        features = ['f%d' % i for i in tree.tree_.feature]
    else:
        features = [feature_names[i] for i in tree.tree_.feature]

    def recurse(left, right, threshold, features, node, depth=0):
        offset = offset_unit * depth
        if (threshold[node] != -2):
            print(offset + "if ( " + features[node] + " <= " + str(threshold[node]) + " ) {")
            if left[node] != -1:
                recurse(left, right, threshold, features, left[node], depth + 1)
            print(offset + "} else {")
            if right[node] != -1:
                recurse(left, right, threshold, features, right[node], depth + 1)
            print(offset + "}")
        else:
            print(offset + "return " + str(value[node]))

    recurse(left, right, threshold, features, 0, 0)
You can also make it more informative by distinguishing which class a leaf node belongs to, or even by mentioning its output value:
def print_decision_tree(tree, feature_names, offset_unit='    '):
    left      = tree.tree_.children_left
    right     = tree.tree_.children_right
    threshold = tree.tree_.threshold
    value = tree.tree_.value
    if feature_names is None:
        features = ['f%d' % i for i in tree.tree_.feature]
    else:
        features = [feature_names[i] for i in tree.tree_.feature]

    def recurse(left, right, threshold, features, node, depth=0):
        offset = offset_unit * depth
        if (threshold[node] != -2):
            print(offset + "if ( " + features[node] + " <= " + str(threshold[node]) + " ) {")
            if left[node] != -1:
                recurse(left, right, threshold, features, left[node], depth + 1)
            print(offset + "} else {")
            if right[node] != -1:
                recurse(left, right, threshold, features, right[node], depth + 1)
            print(offset + "}")
        else:
            #print(offset,value[node])

            #To remove values from node
            temp = str(value[node])
            mid = len(temp)//2
            tempx = []
            tempy = []
            cnt = 0
            for i in temp:
                if cnt <= mid:
                    tempx.append(i)
                    cnt += 1
                else:
                    tempy.append(i)
                    cnt += 1
            val_yes = []
            val_no = []
            res = []
            for j in tempx:
                if j == "[" or j == "]" or j == "." or j == " ":
                    res.append(j)
                else:
                    val_no.append(j)
            for j in tempy:
                if j == "[" or j == "]" or j == "." or j == " ":
                    res.append(j)
                else:
                    val_yes.append(j)
            val_yes = int("".join(map(str, val_yes)))
            val_no = int("".join(map(str, val_no)))

            if val_yes > val_no:
                print(offset, '\033[1m', "YES")
                print('\033[0m')
            elif val_no > val_yes:
                print(offset, '\033[1m', "NO")
                print('\033[0m')
            else:
                print(offset, '\033[1m', "Tie")
                print('\033[0m')

    recurse(left, right, threshold, features, 0, 0)
Here is my approach to extract the decision rules in a form that can be used directly in SQL, so the data can be grouped by node. (Based on the approaches of previous posters.)
The result will be subsequent CASE clauses that can be copied into an SQL statement, e.g.:

SELECT COALESCE(
    CASE WHEN <conditions> THEN <NodeA>,
    CASE WHEN <conditions> THEN <NodeB>,
    ....
) AS NodeName
FROM <table or view>
import numpy as np
import pickle
feature_names=.............
features = [feature_names[i] for i in range(len(feature_names))]
clf= pickle.loads(trained_model)
impurity=clf.tree_.impurity
importances = clf.feature_importances_
SqlOut=""
#global Conts
global ContsNode
global Path
#Conts=[]#
ContsNode=[]
Path=[]
global Results
Results=[]
def print_decision_tree(tree, feature_names, offset_unit='    '):
    left      = tree.tree_.children_left
    right     = tree.tree_.children_right
    threshold = tree.tree_.threshold
    value = tree.tree_.value

    if feature_names is None:
        features = ['f%d' % i for i in tree.tree_.feature]
    else:
        features = [feature_names[i] for i in tree.tree_.feature]

    def recurse(left, right, threshold, features, node, depth=0, ParentNode=0, IsElse=0):
        global Conts
        global ContsNode
        global Path
        global Results
        global LeftParents
        LeftParents = []
        global RightParents
        RightParents = []
        for i in range(len(left)):  # This is just to tell you how to create a list.
            LeftParents.append(-1)
            RightParents.append(-1)
            ContsNode.append("")
            Path.append("")
        for i in range(len(left)):  # i is node
            if (left[i] == -1 and right[i] == -1):
                if LeftParents[i] >= 0:
                    if Path[LeftParents[i]] > " ":
                        Path[i] = Path[LeftParents[i]] + " AND " + ContsNode[LeftParents[i]]
                    else:
                        Path[i] = ContsNode[LeftParents[i]]
                if RightParents[i] >= 0:
                    if Path[RightParents[i]] > " ":
                        Path[i] = Path[RightParents[i]] + " AND not " + ContsNode[RightParents[i]]
                    else:
                        Path[i] = " not " + ContsNode[RightParents[i]]
                Results.append(" case when " + Path[i] + " then '" + "{:4d}".format(i) + " " + "{:2.2f}".format(impurity[i]) + " " + Path[i][0:180] + "'")
            else:
                if LeftParents[i] >= 0:
                    if Path[LeftParents[i]] > " ":
                        Path[i] = Path[LeftParents[i]] + " AND " + ContsNode[LeftParents[i]]
                    else:
                        Path[i] = ContsNode[LeftParents[i]]
                if RightParents[i] >= 0:
                    if Path[RightParents[i]] > " ":
                        Path[i] = Path[RightParents[i]] + " AND not " + ContsNode[RightParents[i]]
                    else:
                        Path[i] = " not " + ContsNode[RightParents[i]]
                if (left[i] != -1):
                    LeftParents[left[i]] = i
                if (right[i] != -1):
                    RightParents[right[i]] = i
                ContsNode[i] = "( " + features[i] + " <= " + str(threshold[i]) + " ) "

    recurse(left, right, threshold, features, 0, 0, 0, 0)
print_decision_tree(clf,features)
SqlOut=""
for i in range(len(Results)):
    SqlOut = SqlOut + Results[i] + " end," + chr(13) + chr(10)
I needed a more human-friendly format of rules from the decision tree. I'm building an open-source AutoML Python package, and many times MLJAR users want to see the exact rules from the tree.
That's why I implemented a function based on paulkernfeld's answer.
import numpy as np
from sklearn.tree import _tree

def get_rules(tree, feature_names, class_names):
    tree_ = tree.tree_
    feature_name = [
        feature_names[i] if i != _tree.TREE_UNDEFINED else "undefined!"
        for i in tree_.feature
    ]

    paths = []
    path = []

    def recurse(node, path, paths):
        if tree_.feature[node] != _tree.TREE_UNDEFINED:
            name = feature_name[node]
            threshold = tree_.threshold[node]
            p1, p2 = list(path), list(path)
            p1 += [f"({name} <= {np.round(threshold, 3)})"]
            recurse(tree_.children_left[node], p1, paths)
            p2 += [f"({name} > {np.round(threshold, 3)})"]
            recurse(tree_.children_right[node], p2, paths)
        else:
            path += [(tree_.value[node], tree_.n_node_samples[node])]
            paths += [path]

    recurse(0, path, paths)

    # sort by samples count
    samples_count = [p[-1][1] for p in paths]
    ii = list(np.argsort(samples_count))
    paths = [paths[i] for i in reversed(ii)]

    rules = []
    for path in paths:
        rule = "if "
        for p in path[:-1]:
            if rule != "if ":
                rule += " and "
            rule += str(p)
        rule += " then "
        if class_names is None:
            rule += "response: " + str(np.round(path[-1][0][0][0], 3))
        else:
            classes = path[-1][0][0]
            l = np.argmax(classes)
            rule += f"class: {class_names[l]} (proba: {np.round(100.0*classes[l]/np.sum(classes), 2)}%)"
        rule += f" | based on {path[-1][1]:,} samples"
        rules += [rule]

    return rules
The rules are sorted by the number of training samples assigned to each rule. For each rule, there is information about the predicted class name and probability of prediction for classification tasks. For the regression task, only information about the predicted value is printed.
from sklearn import datasets
from sklearn.tree import DecisionTreeRegressor
from sklearn import tree
# Prepare the data
boston = datasets.load_boston()
X = boston.data
y = boston.target
# Fit the regressor, set max_depth = 3
regr = DecisionTreeRegressor(max_depth=3, random_state=1234)
model = regr.fit(X, y)
# Print rules
rules = get_rules(regr, boston.feature_names, None)
for r in rules:
    print(r)
The printed rules:
if (RM <= 6.941) and (LSTAT <= 14.4) and (DIS > 1.385) then response: 22.905 | based on 250 samples
if (RM <= 6.941) and (LSTAT > 14.4) and (CRIM <= 6.992) then response: 17.138 | based on 101 samples
if (RM <= 6.941) and (LSTAT > 14.4) and (CRIM > 6.992) then response: 11.978 | based on 74 samples
if (RM > 6.941) and (RM <= 7.437) and (NOX <= 0.659) then response: 33.349 | based on 43 samples
if (RM > 6.941) and (RM > 7.437) and (PTRATIO <= 19.65) then response: 45.897 | based on 29 samples
if (RM <= 6.941) and (LSTAT <= 14.4) and (DIS <= 1.385) then response: 45.58 | based on 5 samples
if (RM > 6.941) and (RM <= 7.437) and (NOX > 0.659) then response: 14.4 | based on 3 samples
if (RM > 6.941) and (RM > 7.437) and (PTRATIO > 19.65) then response: 21.9 | based on 1 samples
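For a classification tree, pass the class names as the third argument. A minimal sketch using the iris data (my example, not from the original answer):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=1234).fit(iris.data, iris.target)

rules = get_rules(clf, iris.feature_names, iris.target_names)
for r in rules:
    print(r)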
I've summarized the ways to extract rules from the Decision Tree in my article: Extract Rules from Decision Tree in 3 Ways with Scikit-Learn and Python.
Modified Zelazny7's code to fetch SQL from the decision tree.
# SQL from decision tree
def get_lineage(tree, feature_names):
    left      = tree.tree_.children_left
    right     = tree.tree_.children_right
    threshold = tree.tree_.threshold
    features  = [feature_names[i] for i in tree.tree_.feature]
    le = '<='
    g  = '>'
    # get ids of child nodes
    idx = np.argwhere(left == -1)[:,0]

    def recurse(left, right, child, lineage=None):
        if lineage is None:
            lineage = [child]
        if child in left:
            parent = np.where(left == child)[0].item()
            split = 'l'
        else:
            parent = np.where(right == child)[0].item()
            split = 'r'
        lineage.append((parent, split, threshold[parent], features[parent]))
        if parent == 0:
            lineage.reverse()
            return lineage
        else:
            return recurse(left, right, parent, lineage)

    print 'case '
    for j, child in enumerate(idx):
        clause = ' when '
        for node in recurse(left, right, child):
            if len(str(node)) < 3:
                continue
            i = node
            if i[1] == 'l': sign = le
            else: sign = g
            clause = clause + i[3] + sign + str(i[2]) + ' and '
        clause = clause[:-4] + ' then ' + str(j)
        print clause
    print 'else 99 end as clusters'
Apparently, a long time ago, somebody already decided to try to add the following function to the official scikit-learn tree export functions (which basically only support export_graphviz):
def export_dict(tree, feature_names=None, max_depth=None):
    """Export a decision tree in dict format.
Here is his full commit:
Not exactly sure what happened to this commit. But you could also try to use that function.
I think this warrants a serious documentation request to the good people of scikit-learn to properly document the sklearn.tree.Tree API, which is the underlying tree structure that DecisionTreeClassifier exposes as its attribute tree_.
Just use the function from sklearn.tree like this:
from sklearn.tree import export_graphviz

export_graphviz(tree,
                out_file="tree.dot",
                feature_names=tree.columns)  # or just ["petal length", "petal width"]
Then look in your project folder for the file tree.dot, copy all of its content, paste it at http://www.webgraphviz.com/, and generate your graph :)
Thanks for the wonderful solution of @paulkernfeld. On top of his solution, for all those who want a serialized version of trees, just use tree.threshold, tree.children_left, tree.children_right, tree.feature and tree.value. Since the leaves don't have splits, and hence no feature names and children, their placeholders in tree.feature and tree.children_*** are _tree.TREE_UNDEFINED and _tree.TREE_LEAF, respectively. Every split is assigned a unique index by depth-first search. Notice that tree.value is of shape [n, 1, 1].
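A minimal sketch of such a serialization (my own helper, assuming clf is a fitted tree estimator; the file name is just an example):

import json

t = clf.tree_
serialized = {
    'children_left':  t.children_left.tolist(),
    'children_right': t.children_right.tolist(),
    'feature':        t.feature.tolist(),
    'threshold':      t.threshold.tolist(),
    'value':          t.value.tolist(),   # shape [n, 1, 1] for a single-output regressor
}
with open('tree.json', 'w') as f:
    json.dump(serialized, f)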
Here is a function that generates Python code from a decision tree by converting the output of export_text:
import string
from sklearn.tree import export_text
def export_py_code(tree, feature_names, max_depth=100, spacing=4):
    if spacing < 2:
        raise ValueError('spacing must be > 1')

    # Clean up feature names (for correctness)
    nums = string.digits
    alnums = string.ascii_letters + nums
    clean = lambda s: ''.join(c if c in alnums else '_' for c in s)
    features = [clean(x) for x in feature_names]
    features = ['_'+x if x[0] in nums else x for x in features if x]
    if len(set(features)) != len(feature_names):
        raise ValueError('invalid feature names')

    # First: export tree to text
    res = export_text(tree, feature_names=features,
                      max_depth=max_depth,
                      decimals=6,
                      spacing=spacing-1)

    # Second: generate Python code from the text
    skip, dash = ' '*spacing, '-'*(spacing-1)
    code = 'def decision_tree({}):\n'.format(', '.join(features))
    for line in repr(tree).split('\n'):
        code += skip + "# " + line + '\n'
    for line in res.split('\n'):
        line = line.rstrip().replace('|', ' ')
        if '<' in line or '>' in line:
            line, val = line.rsplit(maxsplit=1)
            line = line.replace(' ' + dash, 'if')
            line = '{} {:g}:'.format(line, float(val))
        else:
            line = line.replace(' {} class:'.format(dash), 'return')
        code += skip + line + '\n'

    return code
Sample usage:
res = export_py_code(tree, feature_names=names, spacing=4)
print (res)
Sample output:
def decision_tree(f1, f2, f3):
    # DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=3,
    #                        max_features=None, max_leaf_nodes=None,
    #                        min_impurity_decrease=0.0, min_impurity_split=None,
    #                        min_samples_leaf=1, min_samples_split=2,
    #                        min_weight_fraction_leaf=0.0, presort=False,
    #                        random_state=42, splitter='best')
    if f1 <= 12.5:
        if f2 <= 17.5:
            if f1 <= 10.5:
                return 2
            if f1 > 10.5:
                return 3
        if f2 > 17.5:
            if f2 <= 22.5:
                return 1
            if f2 > 22.5:
                return 1
    if f1 > 12.5:
        if f1 <= 17.5:
            if f3 <= 23.5:
                return 2
            if f3 > 23.5:
                return 3
        if f1 > 17.5:
            if f1 <= 25:
                return 1
            if f1 > 25:
                return 2
The above example is generated with names = ['f'+str(j+1) for j in range(NUM_FEATURES)].
One handy feature is that it can generate a smaller file size with reduced spacing. Just set spacing=2.
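If you want to turn the generated source string into an actual callable, one possible sketch (using exec into a fresh namespace is my own suggestion, and the argument values are just placeholders):

namespace = {}
exec(export_py_code(tree, feature_names=names, spacing=2), namespace)
decision_tree = namespace['decision_tree']

print(decision_tree(11, 18, 25))  # call it like any other function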
From this answer, you get a readable and efficient representation: https://stackoverflow.com/a/65939892/3746632
The output looks like this. X is a 1D vector representing a single instance's features.
from numba import jit,njit

@njit
def predict(X):
    ret = 0
    if X[0] <= 0.5: # if w_pizza <= 0.5
        if X[1] <= 0.5: # if w_mexico <= 0.5
            if X[2] <= 0.5: # if w_reusable <= 0.5
                ret += 1
            else: # if w_reusable > 0.5
                pass
        else: # if w_mexico > 0.5
            ret += 1
    else: # if w_pizza > 0.5
        pass
    if X[0] <= 0.5: # if w_pizza <= 0.5
        if X[1] <= 0.5: # if w_mexico <= 0.5
            if X[2] <= 0.5: # if w_reusable <= 0.5
                ret += 1
            else: # if w_reusable > 0.5
                pass
        else: # if w_mexico > 0.5
            pass
    else: # if w_pizza > 0.5
        ret += 1
    if X[0] <= 0.5: # if w_pizza <= 0.5
        if X[1] <= 0.5: # if w_mexico <= 0.5
            if X[2] <= 0.5: # if w_reusable <= 0.5
                ret += 1
            else: # if w_reusable > 0.5
                ret += 1
        else: # if w_mexico > 0.5
            ret += 1
    else: # if w_pizza > 0.5
        pass
    if X[0] <= 0.5: # if w_pizza <= 0.5
        if X[1] <= 0.5: # if w_mexico <= 0.5
            if X[2] <= 0.5: # if w_reusable <= 0.5
                ret += 1
            else: # if w_reusable > 0.5
                ret += 1
        else: # if w_mexico > 0.5
            pass
    else: # if w_pizza > 0.5
        ret += 1
    if X[0] <= 0.5: # if w_pizza <= 0.5
        if X[1] <= 0.5: # if w_mexico <= 0.5
            if X[2] <= 0.5: # if w_reusable <= 0.5
                ret += 1
            else: # if w_reusable > 0.5
                pass
        else: # if w_mexico > 0.5
            pass
    else: # if w_pizza > 0.5
        pass
    if X[0] <= 0.5: # if w_pizza <= 0.5
        if X[1] <= 0.5: # if w_mexico <= 0.5
            if X[2] <= 0.5: # if w_reusable <= 0.5
                ret += 1
            else: # if w_reusable > 0.5
                pass
        else: # if w_mexico > 0.5
            ret += 1
    else: # if w_pizza > 0.5
        ret += 1
    if X[0] <= 0.5: # if w_pizza <= 0.5
        if X[1] <= 0.5: # if w_mexico <= 0.5
            if X[2] <= 0.5: # if w_reusable <= 0.5
                ret += 1
            else: # if w_reusable > 0.5
                pass
        else: # if w_mexico > 0.5
            pass
    else: # if w_pizza > 0.5
        ret += 1
    if X[0] <= 0.5: # if w_pizza <= 0.5
        if X[1] <= 0.5: # if w_mexico <= 0.5
            if X[2] <= 0.5: # if w_reusable <= 0.5
                ret += 1
            else: # if w_reusable > 0.5
                pass
        else: # if w_mexico > 0.5
            pass
    else: # if w_pizza > 0.5
        pass
    if X[0] <= 0.5: # if w_pizza <= 0.5
        if X[1] <= 0.5: # if w_mexico <= 0.5
            if X[2] <= 0.5: # if w_reusable <= 0.5
                ret += 1
            else: # if w_reusable > 0.5
                pass
        else: # if w_mexico > 0.5
            pass
    else: # if w_pizza > 0.5
        pass
    if X[0] <= 0.5: # if w_pizza <= 0.5
        if X[1] <= 0.5: # if w_mexico <= 0.5
            if X[2] <= 0.5: # if w_reusable <= 0.5
                ret += 1
            else: # if w_reusable > 0.5
                pass
        else: # if w_mexico > 0.5
            pass
    else: # if w_pizza > 0.5
        pass
    return ret/10