4 votes

When we train on a training set using a decision tree classifier, we get a tree model, and this model can be converted into rules and incorporated into Java code. Now, if I train on the same training set using Naive Bayes, in what form is the model? And how can I incorporate that model into my Java code? If no model results from the training, then what is the difference between Naive Bayes and a lazy learner (e.g. kNN)?

Thanks in advance.


2 Answers

1 vote

Naive Bayes constructs estimates of the conditional probabilities P(f_1,...,f_n|C_j), where the f_i are features and the C_j are classes. Using Bayes' rule, together with estimates of the priors P(C_j) and the evidence P(f_i), these can be translated into posteriors x = P(C_j|f_1,...,f_n), which can be roughly read as "given the features f_i, I think they describe an object of class C_j, and my certainty is x". In fact, NB assumes that the features are independent, so the joint likelihood factorizes into per-feature terms and the model only needs simple probabilities of the form x = P(f_i|C_j): "given f_i alone, I think it is C_j with probability x". Concretely, the classification rule becomes

    P(C_j|f_1,...,f_n) ∝ P(C_j) · P(f_1|C_j) · ... · P(f_n|C_j)

So the form of the model is a set of probabilities:

  • Conditional probabilities P(f_i|C_j) for each feature f_i and each class C_j
  • Priors P(C_j) for each class C_j
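
Since that is the whole model, "incorporating it into Java code" is just a matter of storing those numbers and computing an argmax over classes. Here is a minimal sketch (the class and method names are my own invention; a real implementation would estimate the tables from training counts, typically with Laplace smoothing):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    /** Minimal sketch of a trained Naive Bayes "model": just two probability tables. */
    public class NaiveBayesModel {
        // P(C_j): prior probability of each class
        private final Map<String, Double> priors = new HashMap<>();
        // P(f_i | C_j): probability of feature f_i being present given class C_j,
        // keyed as class -> feature -> probability
        private final Map<String, Map<String, Double>> likelihoods = new HashMap<>();

        public void setPrior(String cls, double p) { priors.put(cls, p); }

        public void setLikelihood(String cls, String feature, double p) {
            likelihoods.computeIfAbsent(cls, c -> new HashMap<>()).put(feature, p);
        }

        /** Classify by maximizing log P(C_j) + sum_i log P(f_i | C_j) (logs avoid underflow). */
        public String classify(Iterable<String> presentFeatures) {
            String best = null;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (Map.Entry<String, Double> e : priors.entrySet()) {
                double score = Math.log(e.getValue()); // log prior
                Map<String, Double> table =
                        likelihoods.getOrDefault(e.getKey(), Collections.emptyMap());
                for (String f : presentFeatures) {
                    // small default for features never seen with this class
                    score += Math.log(table.getOrDefault(f, 1e-9));
                }
                if (score > bestScore) { bestScore = score; best = e.getKey(); }
            }
            return best;
        }
    }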

kNN, on the other hand, is something completely different. It is not really a "learned model" in the strict sense, as you don't fit any parameters. It is rather a classification algorithm which, given a training set and a number k, simply answers the question: "for a given point x, what is the majority class among the k nearest points in the training set?"
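
For contrast, here is what that looks like in code: there is nothing to train, the "model" is the training set itself, and all the work happens at query time. A rough sketch, assuming points are double arrays compared with Euclidean distance (names are mine, not from any library):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class KnnClassifier {
        private final List<double[]> points = new ArrayList<>(); // stored training data
        private final List<String> labels = new ArrayList<>();

        /** "Training" is just memorizing the data — hence "lazy learner". */
        public void add(double[] point, String label) {
            points.add(point);
            labels.add(label);
        }

        /** Majority vote among the k nearest training points. */
        public String classify(double[] x, int k) {
            Integer[] idx = new Integer[points.size()];
            for (int i = 0; i < idx.length; i++) idx[i] = i;
            // sort training points by distance to the query point x
            Arrays.sort(idx, Comparator.comparingDouble(i -> distance(points.get(i), x)));
            Map<String, Integer> votes = new HashMap<>();
            for (int i = 0; i < Math.min(k, idx.length); i++)
                votes.merge(labels.get(idx[i]), 1, Integer::sum);
            return Collections.max(votes.entrySet(), Map.Entry.comparingByValue()).getKey();
        }

        private static double distance(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
            return Math.sqrt(s);
        }
    }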

The main difference is in the input data. Naive Bayes works on objects that are "observations": you simply need features which are either present in or absent from the classified object. It does not matter whether the feature is a color, an object in a photo, a word in a sentence, or an abstract concept in a highly complex topological object.

kNN, by contrast, is a distance-based classifier: it requires objects between which you can measure a distance. So in order to classify abstract objects, you first have to come up with some metric, a distance measure, that describes their similarity, and the result will be highly dependent on that definition. Naive Bayes, on the other hand, is a simple probabilistic model which does not use the concept of distance at all. It treats all features the same way: they are there or they aren't, end of story (of course it can be generalized to continuous variables with a given density function, but that is not the point here).

0 votes

Naive Bayes constructs/estimates, for each output class, the probability distribution from which the training samples of that class were generated.

Now, given these per-class probability distributions, you take a test sample and assign it to whichever class has the highest probability of having generated it.

In short, you take the test sample and run it through all the probability distributions (one for each class), calculating for each distribution the probability that it generated this test sample.
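
To make that concrete for a continuous feature: each class could have its own Gaussian density fitted from the training samples, and classification scores the test point under each class's density weighted by its prior. A minimal sketch with a single feature (the parameter values below are made up purely for illustration):

    public class GaussianNaiveBayes1D {
        /** Gaussian density N(x; mean, var) — the per-class generating distribution. */
        static double density(double x, double mean, double var) {
            return Math.exp(-(x - mean) * (x - mean) / (2 * var))
                 / Math.sqrt(2 * Math.PI * var);
        }

        public static void main(String[] args) {
            // Parameters estimated from training data (values invented for the example):
            double[] means = {1.0, 4.0}, vars = {0.5, 1.0}, priors = {0.6, 0.4};
            double x = 2.5; // test sample
            int best = 0;
            double bestScore = -1;
            for (int c = 0; c < means.length; c++) {
                // P(class) * p(x | class): how likely class c is to have generated x
                double score = priors[c] * density(x, means[c], vars[c]);
                if (score > bestScore) { bestScore = score; best = c; }
            }
            System.out.println("Assigned class: " + best);
        }
    }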