
I followed the tutorial here: https://towardsdatascience.com/creating-the-twitter-sentiment-analysis-program-in-python-with-naive-bayes-classification-672e5589a7ed to create a Twitter sentiment analyser. It uses the Naive Bayes classifier from the NLTK library to classify tweets as positive, negative, or neutral, but the labels it gives back are only ever 'neutral' or 'irrelevant'. I've included my code below; I'm not very experienced with machine learning, so I'd appreciate any help.

I've tried using different sets of tweets to classify; even when I specify a search keyword like 'happy', it still returns 'neutral'.

import nltk

def buildvocab(processedtrainingdata):
    # Build the vocabulary: every distinct word seen in the training tweets
    all_words = []

    for (words, sentiment) in processedtrainingdata:
        all_words.extend(words)

    wordlist = nltk.FreqDist(all_words)
    word_features = wordlist.keys()

    return word_features

def extract_features(tweet):
    tweet_words = set(tweet)
    features = {}
    for word in word_features:
        # One boolean feature per vocabulary word:
        # True if the tweet contains it, False otherwise
        features['contains(%s)' % word] = (word in tweet_words)
    return features

# Building the feature vector

word_features = buildvocab(processedtrainingdata)
training_features = nltk.classify.apply_features(extract_features, processedtrainingdata)
# apply_features does the actual extraction over the training set

# Train the NLTK Naive Bayes classifier on the extracted features
Nbayes = nltk.NaiveBayesClassifier.train(training_features)

Nbayes_result_labels = [Nbayes.classify(extract_features(tweet[0])) for tweet in processedtestset]

# Get the majority vote across all classified tweets
if Nbayes_result_labels.count('positive') > Nbayes_result_labels.count('negative'):
    print('Positive')
    print(str(100 * Nbayes_result_labels.count('positive') / len(Nbayes_result_labels)))
elif Nbayes_result_labels.count('negative') > Nbayes_result_labels.count('positive'):
    print('Negative')
    print(str(100 * Nbayes_result_labels.count('negative') / len(Nbayes_result_labels)))
else:
    print('Neutral')


# The output is always something like this:
print(Nbayes_result_labels)
['neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'irrelevant', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral']

I should have mentioned that the 'corpusfile' of tweets used to train the classifier contains roughly 550 positive-labelled tweets and an equal number of negative ones, but the neutral and irrelevant counts are far higher, at roughly 4000. - Zac
Also, I tried classifying the training data set and it does work for that, i.e. replacing Nbayes_result_labels = [Nbayes.classify(extract_features(tweet[0])) for tweet in processedtestset] with Nbayes_result_labels = [Nbayes.classify(extract_features(tweet[0])) for tweet in processedtrainingdata]. - Zac

1 Answer


Your dataset is highly imbalanced. You mentioned it yourself in one of the comments: you have 550 positive and 550 negative labelled tweets but roughly 4000 neutral ones, which is why the classifier always favours the majority class. You should have an equal number of utterances for every class if possible. You also need to look at evaluation metrics, and you'll most probably see that your recall for the minority classes is poor. An ideal model should do well on all evaluation metrics. To avoid overfitting, some people also add a fourth 'others' class, but for now you can skip that.
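For example, a per-class report makes this visible. A minimal sketch, assuming each entry in processedtestset is a (words, label) pair like the training data, and using scikit-learn's classification_report (any per-class precision/recall computation would do):

from sklearn.metrics import classification_report

# Gold labels from the test set and the predictions you already computed
true_labels = [sentiment for (words, sentiment) in processedtestset]
print(classification_report(true_labels, Nbayes_result_labels))
# Typical symptom of imbalance: high recall for 'neutral',
# near-zero recall for 'positive' and 'negative'.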

Here's something you can do to improve the performance of your model: either oversample the minority classes (i.e. add more data by adding similar utterances), undersample the majority class, or use a combination of both. You can read more about oversampling and undersampling online.
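As a rough sketch of plain random oversampling (assuming processedtrainingdata is a list of (words, sentiment) pairs, as in your code), you could duplicate random examples of the smaller classes until every class matches the largest one:

import random
from collections import Counter

def oversample(training_data):
    # training_data is a list of (words, sentiment) pairs
    counts = Counter(sentiment for (_, sentiment) in training_data)
    target = max(counts.values())  # size of the largest class
    balanced = list(training_data)
    for label, count in counts.items():
        examples = [item for item in training_data if item[1] == label]
        # Duplicate random examples until this class reaches `target` items
        balanced.extend(random.choices(examples, k=target - count))
    random.shuffle(balanced)
    return balanced

balanced_training_data = oversample(processedtrainingdata)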

In this new dataset, try to have utterances of all classes in a 1:1:1 ratio if possible. Finally, try other algorithms as well, with hyperparameters tuned through grid search, random search, or TPOT.
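As one illustration (a sketch, not the only option): scikit-learn's MultinomialNB in a pipeline with a CountVectorizer, tuned with GridSearchCV. Joining the token lists back into strings is an assumption about your preprocessed data, and it reuses the balanced_training_data from the sketch above:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import GridSearchCV

# Convert the (words, sentiment) pairs into raw texts and labels
texts = [' '.join(words) for (words, sentiment) in balanced_training_data]
labels = [sentiment for (words, sentiment) in balanced_training_data]

pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('clf', MultinomialNB()),
])
param_grid = {
    'vect__ngram_range': [(1, 1), (1, 2)],
    'clf__alpha': [0.1, 0.5, 1.0],
}
search = GridSearchCV(pipeline, param_grid, cv=5, scoring='f1_macro')
search.fit(texts, labels)
print(search.best_params_, search.best_score_)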

Edit: in your case 'irrelevant' is the 'others' class, so you actually have four classes; try to get the dataset into a 1:1:1:1 ratio across them.