0 votes

I understood the general formula:

P(i | x) = p(i)p(x|i) / (sum_j p(j)p(x|j))

But I cannot successfully apply it to this exercise:

Consider the data sets for two classes X1 = {(0,0)} and X2 = {(1,0), (0,1)}. Which classification probabilities will a naive Bayes classifier produce for the feature vector (0,0)?

I can't understand what p(1) and p((0,0)|1) would be in this case.

There are plenty of books and tutorials that explain the naive Bayes classifier. Why don't you read the explanations given by professional teachers instead of expecting some random internet user to explain your homework to you? - Has QUIT--Anony-Mousse
Because all the books and "explanations given by professional teachers" use different cases, like real-world ones (illness, email spam). I cannot apply it to this matrix example. - Ambi
I actually agree the question isn't very well written. I would guess X1 are the training examples for category 1, and X2 the examples for category 2. This makes p(1) the prior probability of category 1 and p((0,0)|1) the likelihood. And I'm guessing this should have a homework tag - Ben Allison
There is no matrix in your question. X1 contains one example (x,y): (0,0), and X2 contains two training examples. I see training data with 2 classes (X1 and X2), 3 instances total, and 2 attributes. I have to agree that the question uses a different syntax than most books, but so what? I would again use a different syntax in an answer. - Has QUIT--Anony-Mousse
It's not homework, I am preparing for an exam and this was given as an example with the solution p(1 | (0,0)) = 2/3. I just wanted to figure out how to get that answer. - Ambi

1 Answer

0 votes

The naive Bayes classifier is not the Bayes formula! Those are two completely different concepts!
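For completeness, here is a minimal sketch (not part of the original answer) of how a naive Bayes classifier reproduces the stated result p(1 | (0,0)) = 2/3: the prior p(c) is estimated as the fraction of training examples in class c, and the likelihood factorizes over features, each estimated by relative frequency within the class. The function names are illustrative.

```python
from collections import Counter

# Training data from the exercise: class 1 -> X1, class 2 -> X2
data = {1: [(0, 0)], 2: [(1, 0), (0, 1)]}

def prior(c):
    # p(c): fraction of all training examples belonging to class c
    total = sum(len(examples) for examples in data.values())
    return len(data[c]) / total

def likelihood(x, c):
    # Naive assumption: p(x|c) = product over features k of p(x_k|c),
    # each factor estimated by relative frequency within class c.
    p = 1.0
    for k, value in enumerate(x):
        counts = Counter(example[k] for example in data[c])
        p *= counts[value] / len(data[c])
    return p

def posterior(c, x):
    # Bayes' rule with the factorized (naive) likelihood
    numerator = prior(c) * likelihood(x, c)
    denominator = sum(prior(j) * likelihood(x, j) for j in data)
    return numerator / denominator

# p(1)=1/3, p((0,0)|1)=1; p(2)=2/3, p((0,0)|2)=1/2 * 1/2 = 1/4
print(posterior(1, (0, 0)))  # -> 0.666... = 2/3
```

Working it by hand: p(1 | (0,0)) = (1/3 · 1) / (1/3 · 1 + 2/3 · 1/4) = (1/3) / (1/2) = 2/3, matching the solution given in the comments.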