
I am testing a sentiment analysis model using NLTK. I want to add a confusion matrix to the classifier results and, if possible, precision, recall and F-measure values as well; so far I only have accuracy. The movie_reviews data has pos and neg labels. However, to train the classifier I use "featuresets", which have a different format from the usual (sentence, label) structure, so I am not sure whether I can use confusion_matrix from sklearn after training the classifier on "featuresets".

import nltk
import random
from nltk.corpus import movie_reviews

# Build (word list, label) pairs for every review in the corpus.
documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

random.shuffle(documents)

all_words = []

for w in movie_reviews.words():
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)

# Use the 3000 most frequent words as features; FreqDist keys are not
# ordered by frequency, so take most_common rather than the first 3000 keys.
word_features = [w for (w, count) in all_words.most_common(3000)]

def find_features(document):
    # Binary bag-of-words features: True if the word occurs in the review.
    words = set(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)
    return features


featuresets = [(find_features(rev), category) for (rev, category) in documents]

training_set = featuresets[:1900]
testing_set = featuresets[1900:]


classifier = nltk.NaiveBayesClassifier.train(training_set)


print("Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100)

1 Answer


First, classify all the test examples and store the predicted outcomes and the gold labels in two lists.

Then, you can use nltk.ConfusionMatrix.

test_result = []
gold_result = []

# Collect the predicted label and the gold label for every test example.
for features, label in testing_set:
    test_result.append(classifier.classify(features))
    gold_result.append(label)
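If you prefer, the same lists can be built without the explicit loop; NLTK's ClassifierI interface provides classify_many:

# Equivalent, using classify_many from nltk.classify.api.ClassifierI:
test_result = classifier.classify_many([fs for (fs, lbl) in testing_set])
gold_result = [lbl for (fs, lbl) in testing_set]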

Now you can calculate different metrics.

CM = nltk.ConfusionMatrix(gold_result, test_result)
print(CM)

print("Naive Bayes Algo accuracy percent:"+str((nltk.classify.accuracy(classifier, testing_set))*100)+"\n")

labels = {'pos', 'neg'}

from collections import Counter

# Tally true positives, false negatives and false positives per label from
# the confusion matrix; rows are gold labels, columns are predicted labels.
TP, FN, FP = Counter(), Counter(), Counter()
for i in labels:
    for j in labels:
        if i == j:
            TP[i] += int(CM[i, j])
        else:
            FN[i] += int(CM[i, j])
            FP[j] += int(CM[i, j])
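For the two movie_reviews labels this reduces to reading single cells, since CM is indexed as CM[gold, predicted]; a quick sanity check:

# CM[i, j] counts items whose gold label is i and whose predicted label is j.
assert TP['pos'] == int(CM['pos', 'pos'])   # gold pos, predicted pos
assert FN['pos'] == int(CM['pos', 'neg'])   # gold pos, predicted neg
assert FP['pos'] == int(CM['neg', 'pos'])   # gold neg, predicted pos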

print("label\tprecision\trecall\tf_measure")
for label in sorted(labels):
    precision, recall = 0, 0
    if TP[label] == 0:
        f_measure = 0
    else:
        precision = float(TP[label]) / (TP[label]+FP[label])
        recall = float(TP[label]) / (TP[label]+FN[label])
        f_measure = float(2) * (precision * recall) / (precision + recall)
    print(label+"\t"+str(precision)+"\t"+str(recall)+"\t"+str(f_measure))

You can check how to calculate precision and recall here.
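For reference, the definitions used above are: precision = TP / (TP + FP), recall = TP / (TP + FN), and F-measure is their harmonic mean, 2 * precision * recall / (precision + recall).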

You can also use sklearn.metrics for these calculations, with the gold_result and test_result values:

from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix

print('\nClassification report:\n', classification_report(gold_result, test_result))
print('\nConfusion matrix:\n', confusion_matrix(gold_result, test_result))
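If you need the raw numbers rather than a formatted report, sklearn.metrics also exposes them directly; a minimal sketch (the order passed to labels= is a choice, not a requirement):

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Per-label arrays, ordered as in labels=:
p, r, f, support = precision_recall_fscore_support(
    gold_result, test_result, labels=['neg', 'pos'])
print('accuracy:', accuracy_score(gold_result, test_result))
print('neg:', p[0], r[0], f[0], '| pos:', p[1], r[1], f[1])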