
I have used backpropagation on a dataset that has 3 classes: L, B, and R. After training the neural network, I also built a confusion matrix.

Actual class array:

sample_test = array([0, 1, 0, 2, 0, 2, 1, 1, 0, 1, 1, 1], dtype=int64)

Predicted class array:

yp = array([0, 1, 0, 2, 0, 2, 0, 1, 0, 1, 1, 1], dtype=int64)

Code for confusion matrix:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels

class_names = ['B','R','L']

def plot_confusion_matrix(y_true, y_pred, classes,
                          normalize=False,
                          title=None,
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if not title:
        if normalize:
            title = 'Normalized confusion matrix'
        else:
            title = 'Confusion matrix, without normalization'

    # Compute confusion matrix
    cm = confusion_matrix(y_true, y_pred)
    # Only use the labels that appear in the data
    classes = [classes[i] for i in unique_labels(y_true, y_pred)]
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

    print(cm)

    fig, ax = plt.subplots()
    im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
    ax.figure.colorbar(im, ax=ax)
    # We want to show all ticks...
    ax.set(xticks=np.arange(cm.shape[1]),
           yticks=np.arange(cm.shape[0]),
           # ... and label them with the respective list entries
           xticklabels=classes, yticklabels=classes,
           title=title,
           ylabel='True label',
           xlabel='Predicted label')

    # Rotate the tick labels and set their alignment.
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
             rotation_mode="anchor")

    # Loop over data dimensions and create text annotations.
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, format(cm[i, j], fmt),
                    ha="center", va="center",
                    color="white" if cm[i, j] > thresh else "black")
    fig.tight_layout()
    return ax

np.set_printoptions(precision=2)

# Plot non-normalized confusion matrix
plot_confusion_matrix(sample_test, yp, classes=class_names, 
                      title='Confusion matrix, without normalization')

# Plot normalized confusion matrix
plot_confusion_matrix(sample_test, yp, classes=class_names, normalize=True,
                      title='Normalized confusion matrix')

plt.show()

Output:

(Plots of the non-normalized and normalized confusion matrices.)

Now I want to plot the ROC curve for this and calculate the MAUC (multi-class AUC). I looked at the documentation but can't properly understand what to do.

I would be very grateful if anyone could give me some suggestions on how to do that. Thanks in advance.


1 Answer


The ROC curve is calculated per class: treat each class as the "positive" class and the remaining classes as the "negative" class (one-vs-rest). Note that you first need the predicted probability per class, e.g. from predict_proba(), rather than hard class labels. Something like this:

import pandas as pd
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.metrics import roc_auc_score

iris = sns.load_dataset('iris')
X = iris.drop('species',axis=1)
y = iris['species']
X_train, X_test, y_train, y_test = train_test_split(X,y)

le = preprocessing.LabelEncoder()
le.fit(y_train)

model = DecisionTreeClassifier(max_depth=1)
model.fit(X_train,le.transform(y_train))

predictions = pd.DataFrame(model.predict_proba(X_test),
                           columns=list(le.inverse_transform(model.classes_)))

# One-vs-rest AUC for the 'versicolor' class
print(roc_auc_score((y_test == 'versicolor').astype(float), predictions['versicolor']))
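
To extend this to the question's 3-class setup (one ROC curve per class plus a single multi-class AUC), here is a minimal sketch. It assumes you can obtain an (n_samples, 3) array of class probabilities from your backpropagation network; the random y_score below is only a stand-in for that output, since the hard labels in yp are not enough to draw an ROC curve.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc, roc_auc_score

class_names = ['B', 'R', 'L']                             # same order as the integer labels 0, 1, 2
y_true = np.array([0, 1, 0, 2, 0, 2, 1, 1, 0, 1, 1, 1])   # sample_test from the question

# Stand-in for the network's output: replace this with the real
# (n_samples, 3) array of class probabilities from your model.
rng = np.random.default_rng(0)
y_score = rng.random((len(y_true), 3))
y_score = y_score / y_score.sum(axis=1, keepdims=True)

# Binarize the labels: column k is 1 where the true class is k (one-vs-rest).
y_true_bin = label_binarize(y_true, classes=[0, 1, 2])

plt.figure()
for k, name in enumerate(class_names):
    fpr, tpr, _ = roc_curve(y_true_bin[:, k], y_score[:, k])
    plt.plot(fpr, tpr, label=f'{name} (AUC = {auc(fpr, tpr):.2f})')
plt.plot([0, 1], [0, 1], linestyle='--', color='grey')    # chance line
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()
plt.show()

# Multi-class AUC: 'ovo' with 'macro' averaging is the Hand & Till (2001) M measure.
print(roc_auc_score(y_true, y_score, multi_class='ovo', average='macro'))

roc_auc_score with multi_class='ovo' and average='macro' corresponds to the Hand & Till (2001) M measure, which is what MAUC usually refers to; multi_class='ovr' would give the one-vs-rest macro average instead.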