8
votes

I am writing a Naive Bayes classifier for performing indoor room localization from WiFi signal strength. So far it is working well, but I have some questions about missing features. This occurs frequently because I use WiFi signals, and WiFi access points are simply not available everywhere.

Question 1: Suppose I have two classes, Apple and Banana, and I want to classify test instance T1 as below.

[Image: feature table. Class "Apple" has training data for features 1, 2, and 3; class "Banana" for features 2, 3, and 4. Test instance T1 has features 1, 3, and 4; test instance T2 includes feature 5, which appears in neither class.]

I fully understand how the Naive Bayes classifier works. Below is the formula I am using from Wikipedia's article on the classifier. I am using a uniform prior probability P(C=c), so I omit that factor in my implementation.

classify(f_1, ..., f_n) = argmax_c [ P(C=c) * prod_i P(F_i = f_i | C=c) ]

Now, when I compute the right-hand side of the equation and loop over all the class-conditional feature probabilities, which set of features do I use? Test instance T1 uses features 1, 3, and 4, but the two classes do not have all of these features. So when I perform my loop to compute the probability product, I see several choices for what to loop over:

  1. Loop over the union of all features from training, namely features 1, 2, 3, 4. Since the test instance T1 does not have feature 2, then use an artificial tiny probability.
  2. Loop over only features of the test instance, namely 1, 3, and 4.
  3. Loop over the features available for each class. To compute class-conditional probability for 'Apple', I would use features 1, 2, and 3, and for 'Banana', I would use 2, 3, and 4.

Which of the above should I use?

Question 2: Let's say I want to classify test instance T2, where T2 has a feature not found in either class. I am using log probabilities to help eliminate underflow, but I am not sure of the details of the loop. I am doing something like this (in Java-like pseudocode):

double bestLogProbability = Double.NEGATIVE_INFINITY; // lower than any achievable sum
ClassLabel bestClassLabel = null;

for (ClassLabel classLabel : allClassLabels)
{
    Double logProbabilitySum = 0.0;

    for (Feature feature : allFeatures)
    {
        Double logProbability = getLogProbability(classLabel, feature);

        if (logProbability != null)
        {
            logProbabilitySum += logProbability;
        }
    }

    // compare this class's accumulated sum against the best seen so far
    if (logProbabilitySum > bestLogProbability)
    {
        bestLogProbability = logProbabilitySum;
        bestClassLabel = classLabel;
    }
}

The problem is that if none of the classes have the test instance's features (feature 5 in the example), then logProbabilitySum will remain 0.0, resulting in a bestLogProbability of 0.0, or linear probability of 1.0, which is clearly wrong. What's a better way to handle this?


2 Answers

6
votes

For the Naive Bayes classifier, the right-hand side of your equation should iterate over all attributes. If you have attributes that are sparsely populated, the usual way to handle that is with an m-estimate of the probability, which uses an equivalent sample size to calculate your probabilities. This prevents the class-conditional probabilities from becoming zero when an attribute value is missing from your training data. A web search for "m-estimate" and "equivalent sample size" will turn up numerous descriptions of the formula; a good reference text that covers it is Machine Learning by Tom Mitchell. The basic formula is

P_i = (n_i + m*p_i) / (n + m)

n_i is the number of training instances where the attribute has value f_i, n is the number of training instances (with the current classification), m is the equivalent sample size, and p_i is the prior probability for f_i. If you set m=0, this just reverts to the standard probability values (which may be zero, for missing attribute values). As m becomes very large, P_i approaches p_i (i.e., the probability is dominated by the prior probability). If you don't have a prior probability to use, just make it 1/k, where k is the number of attribute values.
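
As a rough sketch (the names countWithValue, classCount and prior below are just illustrative, not taken from your code), the m-estimate is essentially a one-liner:

// Minimal sketch of the m-estimate for one attribute value of one class.
public class MEstimate
{
    // P_i = (n_i + m*p_i) / (n + m)
    static double mEstimate(int countWithValue, int classCount, double m, double prior)
    {
        return (countWithValue + m * prior) / (classCount + m);
    }

    public static void main(String[] args)
    {
        // Value never observed for this class (n_i = 0), n = 20 training instances,
        // equivalent sample size m = 5, uniform prior over k = 4 values (p_i = 0.25):
        System.out.println(mEstimate(0, 20, 5.0, 0.25)); // prints 0.05 instead of 0
    }
}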

If you use this approach, then for your instance T2, which has no attributes present in the training data, the result will be whichever class occurs most often in the training data. This makes sense since there is no relevant information in the training data by which you could make a better decision.

1
vote

I would be tempted to simply ignore any features that are not found in all classes during training. If you choose to do otherwise, you are essentially hallucinating data and then treating it on a par with data that really existed at classification time. So my simple answer to question 1 would be to make the decision on the basis of feature 3 alone (you just don't have enough information to do anything else). This is part of what the m-estimate mentioned by @bogatron is doing.

There's a more complicated answer for classes where certain features are missing from the training data, but it would take a good deal more work. The m-estimate is really a point estimate of the posterior distribution over p_i (which in your Gaussian case would be mu_i, sigma_i), combining a prior on p_i with the likelihood of your training data, p(data | p_i) (on its own, the data would give you the fraction n_i / n). In the case where you observe no datapoints, you can essentially revert to the prior for the predictive distribution of that feature.
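
To make that concrete in the simplest case (a single binary-valued feature, purely for illustration, reusing the notation from the other answer): put a Beta(m*p_i, m*(1 - p_i)) prior on the unknown feature probability. After observing the feature value n_i times in n training instances of the class, the posterior mean is

(n_i + m*p_i) / (n + m)

which is exactly the m-estimate, and with n = n_i = 0 it collapses back to the prior p_i.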

Now, how do you go about estimating that prior? Well, if the number of classes missing a given feature is small relative to the number of classes that do have data for it, you can infer the parameters of the prior from the classes which do have data, and take the predictive distribution for the classes missing data to be simply this prior (for the classes that have data, your predictive distribution is the posterior). One useful pointer: since you seem to be assuming your data are normally distributed (or at least characterised by their mean and standard deviation), the prior on the mean should also be normal for the sake of conjugacy. I would probably avoid doing inference about the prior distribution of your standard deviations, since this is a bit fiddly if you're new to it.
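
As a rough sketch of that idea for a single Gaussian feature (all names here are illustrative, and the prior is deliberately crude: the spread of the per-class means stands in for a properly derived posterior predictive):

// Rough sketch: build a prior over a feature's mean from the classes that did
// observe it, and fall back to that prior for classes that did not.
import java.util.List;

public class FeaturePriorSketch
{
    // Gaussian density, used both for classes with data and for the fallback prior.
    static double gaussian(double x, double mean, double variance)
    {
        double d = x - mean;
        return Math.exp(-d * d / (2 * variance)) / Math.sqrt(2 * Math.PI * variance);
    }

    // Prior mean and variance estimated from the per-class means of the classes
    // that have training data for this feature.
    static double[] priorFromClassMeans(List<Double> classMeans)
    {
        double mean = classMeans.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        double var = classMeans.stream()
                .mapToDouble(m -> (m - mean) * (m - mean))
                .sum() / Math.max(1, classMeans.size() - 1);
        return new double[] { mean, var };
    }

    // Class-conditional likelihood of a test value x: the class's own Gaussian if it
    // observed the feature, otherwise the prior's density.
    static double featureLikelihood(double x, Double classMean, Double classVar, double[] prior)
    {
        if (classMean != null && classVar != null)
        {
            return gaussian(x, classMean, classVar);
        }
        return gaussian(x, prior[0], prior[1]);
    }
}

For a class that never observed the feature, featureLikelihood falls back to the prior's density; for a class that did, it uses that class's own mean and variance.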

Note, however, that this only makes sense if enough classes have observations for that feature that the fraction with missing values is small. In particular, in your example only a single class has observations of feature 1, so the best you could possibly do for feature 1 in class "Banana" would be to assume that the uncertainty about its mu_1 is represented by a distribution centered around "Apple"'s mu_1 with some arbitrary variance. Or you could assume the two means were equal, in which case the feature would have no effect on the decision and you might as well have ignored it!

Thus, unfortunately, the answer to your Question 2 is that your code is doing the correct thing. If your new test instance has only features that have never been observed in training, how could you hope to pick a class for it? You can do no better than choose according to the prior.