I'm trying to build a decision tree for classification, but no tree gets created. The same data reaches 0.85 accuracy with an SVM (training data == test data); "play" is the target...
Any idea what I'm doing wrong? Here is the data and code: https://gist.github.com/romeokienzler/c471819cbf156a69f73daf49f8c700c6
outlook,temp,humidity,windy,play
sunny,hot,high,false,no
sunny,hot,high,true,no
overcast,hot,high,false,yes
rainy,mild,high,false,yes
rainy,cool,normal,false,yes
rainy,cool,normal,true,no
overcast,cool,normal,true,yes
sunny,mild,high,false,no
sunny,cool,normal,false,yes
rainy,mild,normal,false,yes
sunny,mild,normal,true,yes
overcast,mild,high,true,yes
overcast,hot,normal,false,yes
rainy,mild,high,true,no
To use the SVM I've encoded the data (a sketch of the encoding step follows the table): https://gist.github.com/romeokienzler/9bfce4182eda3d7662315621462c9cc6
outlook,temp,humidity,windy,play
1,1,2,FALSE,FALSE
1,1,2,TRUE,FALSE
2,1,2,FALSE,TRUE
3,2,2,FALSE,TRUE
3,3,1,FALSE,TRUE
3,3,1,TRUE,FALSE
2,3,1,TRUE,TRUE
1,2,2,FALSE,FALSE
1,3,1,FALSE,TRUE
3,2,1,FALSE,TRUE
1,2,1,TRUE,TRUE
2,2,2,TRUE,TRUE
2,1,1,FALSE,TRUE
3,2,2,TRUE,FALSE
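For reference, an encoding along these lines reproduces the table above (just a sketch; the input filename 5.tennis.csv and the level ordering are illustrative and may differ from what the gist actually used):

# read the categorical data and map each level to an integer code
df <- read.csv("5.tennis.csv", stringsAsFactors = TRUE)
enc <- data.frame(
  outlook  = as.integer(factor(df$outlook,  levels = c("sunny", "overcast", "rainy"))),
  temp     = as.integer(factor(df$temp,     levels = c("hot", "mild", "cool"))),
  humidity = as.integer(factor(df$humidity, levels = c("normal", "high"))),
  windy    = df$windy == "true",   # logical flag
  play     = df$play == "yes"      # logical target
)
write.csv(enc, "5.tennis_encoded.csv", row.names = FALSE)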
This is the SVM case:
library(e1071)
df <- read.csv("5.tennis_encoded.csv")
x <- subset(df, select = -play)  # features: outlook, temp, humidity, windy
y <- df$play                     # target
model <- svm(x, y, type = "C")   # C-classification
pred <- predict(model, x)        # predict on the training data itself
truthVector <- pred == y
good <- sum(truthVector)         # number of correct predictions
bad <- sum(!truthVector)         # number of wrong predictions
good / (good + bad)              # accuracy
[1] 0.8571429
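A shorter way to get the same number (plus a quick confusion table), just for reference:

mean(pred == y)                       # fraction of correct predictions
table(predicted = pred, actual = y)   # confusion table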
And this is the decision tree case:
library(rpart)
df <- read.csv("5.tennis_encoded.csv")
model <- rpart(play ~ ., method = "class", data = df)
print(model)
1) root 14 5 TRUE (0.3571429 0.6428571) *
So I basically get a tree consisting of only the root node, with a 0.64 probability for play == yes.
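I wondered whether rpart's default stopping rules are the problem: with only 14 rows, the default minsplit = 20 (and cp = 0.01) would already prevent any split at the root. This is the variant I'd try next (a sketch that relaxes those two control parameters):

library(rpart)
df <- read.csv("5.tennis_encoded.csv")
# relax the stopping rules so splits are attempted on such a small data set
model <- rpart(play ~ ., method = "class", data = df,
               control = rpart.control(minsplit = 2, cp = 0))
print(model)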
Any ideas what I'm doing wrong?