
I'm relatively new to scikit learn/machine learning. I have to create a decision tree using the Titanic dataset, and it needs to use KFold cross validation with 5 folds. Here's what I have so far:

from sklearn import tree
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
from io import StringIO

cv = KFold(n_splits=5)

tree_model = tree.DecisionTreeClassifier(max_depth=3)
print(titanic_train.describe())
fold_accuracy = []
for train_index, valid_index in cv.split(X_train):
    train_x, test_x = X_train.iloc[train_index], X_train.iloc[valid_index]
    train_y, test_y = y_train.iloc[train_index], y_train.iloc[valid_index]

    model = tree_model.fit(train_x, train_y)
    valid_acc = model.score(test_x, test_y)
    fold_accuracy.append(valid_acc)
    print(confusion_matrix(y_test, model.predict(X_test)))

print("Accuracy per fold: ", fold_accuracy, "\n")
print("Average accuracy: ", sum(fold_accuracy)/len(fold_accuracy))
dot_data = StringIO()

My question is: does my fitted model only exist within the loop? I need to predict on a provided test set where "Survived" is unlabeled (in the confusion matrix above, X_test holds the test set's features and y_test the actual survival labels), and I'm not sure that training this way actually trains my main classifier (tree_model) on each fold.


1 Answer


You are retraining the same model at every iteration. There is only one estimator instance, the one you created as tree_model; since fit() returns the estimator itself, the name model is just another reference to that same object. So the model does exist outside the loop, but after the loop it holds only the fit from the last fold, not some combination of all five.
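A minimal sketch of the "two names, one object" point, using the iris dataset as a stand-in since the Titanic frames aren't shown here:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

tree_model = DecisionTreeClassifier(max_depth=3, random_state=0)
model = tree_model.fit(X, y)

# fit() returns the estimator itself, so model and tree_model
# are two names for one and the same fitted object
print(model is tree_model)  # True
```

Each call to fit() inside the loop overwrites the previous fold's trained tree on that one object.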

Check out the grid search functionality in scikit-learn, which runs the cross validation for you (cloning the estimator for each fold): http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
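For plain fold accuracies without a parameter grid, cross_val_score does the same job; a sketch of the usual pattern (again with iris standing in for the Titanic data, variable names matching the question):

```python
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris

X_train, y_train = load_iris(return_X_y=True)  # stand-in for the Titanic training data

cv = KFold(n_splits=5, shuffle=True, random_state=0)
tree_model = DecisionTreeClassifier(max_depth=3, random_state=0)

# cross_val_score clones the estimator for each fold,
# so tree_model itself is left untouched
scores = cross_val_score(tree_model, X_train, y_train, cv=cv)
print("Accuracy per fold:", scores)
print("Average accuracy:", scores.mean())

# For predictions on the unlabeled test set, fit once on ALL the training data
final_model = tree_model.fit(X_train, y_train)
```

Cross validation here only estimates how well the model generalizes; the model you actually submit predictions from should be refit on the full training set, as in the last line.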