
I'm having some very annoying problems getting a naive Bayes classifier to work with a document term matrix. I'm sure I'm making a very simple mistake but can't figure out what it is.

My data comes from accounts spreadsheets. I've been asked to figure out which categories (in text format: mostly names of departments or names of budgets) are more likely to spend money on charities and which ones mostly (or only) spend on private companies. It was suggested that I use naive Bayes classifiers to do this. I have a thousand or so rows of data to train a model and many hundreds of thousands of rows to test the model against.

I have prepared the strings, replacing spaces with underscores and 'and'/'&' with '+', then treated each category as one term: so 'alcohol and drug addiction' becomes 'alcohol+drug_addiction'.
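The preparation step looks roughly like this (a minimal sketch, not my exact code; prepCats is just an illustrative name):

## collapse 'and'/'&' to '+' and remaining spaces to '_' so each category is one token
prepCats <- function(x){
    x <- gsub("\\s+and\\s+|\\s*&\\s*", "+", x)
    gsub("\\s+", "_", x)
}
prepCats("alcohol and drug addiction")  ## "alcohol+drug_addiction"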

Some example rows:

"environment+housing strategy+commissioning third_party_payments supporting_ppl_block_gross_chargeable" -> This row went to a charity
"west_north_west customer+tenancy premises h.r.a._special_maintenance" -> This row went to a private company.

Using this example as a template, I wrote the following function to come up with my document term matrix (using tm), both for training and test data.

library(tm)
library(e1071)

getMatrix <- function(chrVect){
    testsource <- VectorSource(chrVect)
    testcorpus <- Corpus(testsource)
    testcorpus <- tm_map(testcorpus, stripWhitespace)
    testcorpus <- tm_map(testcorpus, removeWords, stopwords("english"))
    ## transpose so rows are documents and columns are terms
    testmatrix <- t(TermDocumentMatrix(testcorpus))
}

trainmatrix <- getMatrix(traindata$cats)
testmatrix <- getMatrix(testdata$cats)

So far, so good. The problem comes when I try to a) apply a naive Bayes model and b) predict from that model. Using the klaR package, I get a zero-probability error, since many of the terms have zero instances in one class, and playing around with the Laplace smoothing values does not seem to fix this. Using e1071, the model worked, but when I tested it using:

model <- naiveBayes(as.matrix(trainmatrix), as.factor(traindata$Code))
rs <- predict(model, as.matrix(testdata$cats))

... every single item was predicted as the same category, even though they should be split roughly equally. Something in the model clearly isn't working. Looking at some of the terms in model$tables, I can see that many have high values for private and zero for charity, and others vice versa. I have used as.factor for the Code.

output:
rs   1  2
  1  0  0
  2 19 17

Any ideas on what is going wrong? Do DTM matrices not play nicely with naiveBayes? Have I missed a step in preparing the data? I'm completely out of ideas. Hope this is all clear; happy to clarify if not. Any suggestions would be much appreciated.

Comment: I'm having an identical issue with a multinomial naive Bayes classifier. Some data gets assigned correctly, but the majority ends up in one class. The class that gets incorrectly filled switches depending on the size of the input data. I've also tried repeating the data to avoid this issue, but the majority still ends up in one class. – christopherlovell

1 Answer


I have run into this problem myself. As far as I can see, you have done everything right; the naive Bayes implementation in e1071 (and thus klaR) is buggy.

But there is an easy and quick fix so that naive Bayes as implemented in e1071 works again: change your text vectors to categorical variables, i.e. with as.factor. You have already done this with your target variable traindata$Code, but you also have to do it for your trainmatrix, and of course for your test data.

I could not track the bug down completely, but it lies in this part of the naive Bayes implementation in e1071 (note that klaR is only a wrapper around e1071):

L <- log(object$apriori) + apply(log(sapply(seq_along(attribs),
            function(v) {
                nd <- ndata[attribs[v]]
                ## nd is now a cell, row i, column attribs[v]
                if (is.na(nd) || nd == 0) {
                    rep(1, length(object$apriori))
                } else {
                    prob <- if (isnumeric[attribs[v]]) {
                        ## we select table for attribute
                        msd <- object$tables[[v]]
                        ## if stddev is eqlt eps, assign threshold
                        msd[, 2][msd[, 2] <= eps] <- threshold
                        dnorm(nd, msd[, 1], msd[, 2])
                    } else {
                        object$tables[[v]][, nd]
                    }
                    prob[prob <= eps] <- threshold
                    prob
                }
            })), 1, sum)

You can see that there is an if-else condition: if we have no numerics, naive Bayes is used as we expect it to work. If we have numerics - and here comes the bug - this naive Bayes automatically assumes a normal distribution. If you only have 0s and 1s in your matrix, dnorm performs very poorly. I assume that, due to the very low values created by dnorm, the probabilities are always replaced by the threshold, and thus the class with the higher a priori probability will always "win".
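You can verify the degeneracy yourself (a minimal sketch; sd = 0.001 mimics e1071 substituting its default threshold for a zero within-class standard deviation):

## a term that is constant within a class has sd 0, which the code above
## replaces with `threshold` (default 0.001) before calling dnorm
dnorm(0, mean = 0, sd = 0.001)  ## ~398.9, a huge density spike
dnorm(1, mean = 0, sd = 0.001)  ## numerically 0, then clipped to the threshold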

However, if I understand your problem correctly, you do not even need prediction, but rather the per-term probabilities to identify which department gives money to whom. Then all you have to do is take a close look at your model: for every term, the model stores the class-conditional probabilities, which I assume is what you are looking for. Let's do this, together with the factor fix above, on a slightly modified version of your sample:

## i have changed the vectors slightly
first <- "environment+housing strategy+commissioning third_party_payments supporting_ppl_block_gross_chargeable"
second <- "west_north_west customer+tenancy premises h.r.a._special_maintenance"

categories <- c("charity", "private")

library(tm)
library(e1071)

getMatrix <- function(chrVect){
    testsource <- VectorSource(chrVect)
    testcorpus <- Corpus(testsource)
    testcorpus <- tm_map(testcorpus, stripWhitespace)
    testcorpus <- tm_map(testcorpus, removeWords, stopwords("english"))
    ## testmatrix <- t(TermDocumentMatrix(testcorpus))
    ## instead just use DocumentTermMatrix; the assignment was superfluous
    return(DocumentTermMatrix(testcorpus))
}

## since you did not supply more data, I cannot do anything about these lines
## trainmatrix <- getMatrix(traindata$cats)
## testmatrix <- getMatrix(testdata$cats)
## instead only
trainmatrix <- getMatrix(c(first, second))

## I prefer building a data frame over bare as.matrix, as I can add the categories more easily
traindf <- data.frame(categories, as.data.frame(as.matrix(trainmatrix)))

## now turn every term column into a factor, since numeric columns hit the buggy dnorm path
for (cols in names(traindf[-1])) traindf[[cols]] <- factor(traindf[[cols]])
## note: traindf <- apply(traindf, 2, as.factor) did not result in factors

## check if it's as we wished
str(traindf)

## it is
## let's create a model (with formula syntax)
model <- naiveBayes(categories~., data=traindf)

## if you look at the output (doubled to see it more clearly)
predict(model, newdata=rbind(traindf[-1], traindf[-1]))

But as I have already said, you do not need to predict. A look at the model is enough: e.g. model$tables$premises will give you the likelihood of the term premises for each class - here, the rows containing premises go to private corporations 100% of the time.
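If you want to scan all terms at once, something along these lines works (a sketch; model$tables is a named list with one conditional table per term):

## print the class-conditional table for every term in the model
for (term in names(model$tables)) {
    cat("\n", term, "\n")
    print(model$tables[[term]])
}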

If you are dealing with very large datasets, you should specify threshold and eps in your call to predict. eps defines the cutoff below which a probability is replaced by the threshold. E.g. eps = 0 and threshold = 0.000001 can be of use.
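For example (testdf here is a hypothetical data frame prepared exactly like traindf):

## lower the clipping floor so small genuine probabilities survive
rs <- predict(model, newdata = testdf, eps = 0, threshold = 0.000001)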

Furthermore, you should stick to plain term-frequency weighting; tf-idf, for example, will not work, due to the dnorm in the naive Bayes code.
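In tm that means keeping the default weighting (a sketch; corpus stands for your prepared corpus):

## term frequency is the default, but you can make it explicit
dtm <- DocumentTermMatrix(corpus, control = list(weighting = weightTf))
## avoid: control = list(weighting = weightTfIdf)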

Hope I can finally get my 50 reputation :P