I am trying to implement a Naive Bayes classifier using a dataset published by the UCI machine learning team. I am new to machine learning and trying to understand techniques to apply to my work-related problems, so I thought it best to understand the theory first.

I am using the Pima dataset (Link to Data - UCI-ML), and my goal is to build a univariate Gaussian Naive Bayes classifier for a K-class problem (the data only covers K = 2). I have split the data and calculated the per-class means, standard deviations, and priors, but after this I am stuck because I am not sure what to do next. I have a feeling that I should be calculating the posterior probability.

Here is my code. I pass percent as a vector because I want to see how the classifier behaves as I increase the training data size within the 80:20 split. For example, if you pass [10 20 30 40], it takes those percentages of the 80% training portion, so it first uses 10% of the 80% as training data, then 20%, and so on.

function [classMean] = naivebayes(file, iter, percent)
    % Expects a .mat file with fields 'data' (N-by-D) and 'labels' (N-by-1)
    dm = load(file);
    for i = 1:iter
        % Shuffle rows, using the same idx for data and labels
        idx = randperm(size(dm.data, 1));
        shuffledMatrix_data = dm.data(idx, :);
        shuffledMatrix_label = dm.labels(idx, :);
        % 80-20 train/test split
        percent_data_80 = round(0.8 * size(shuffledMatrix_data, 1));
        train = shuffledMatrix_data(1:percent_data_80, :);
        test = shuffledMatrix_data(percent_data_80+1:end, :);
        train_labels = shuffledMatrix_label(1:percent_data_80, :);
        test_labels = shuffledMatrix_label(percent_data_80+1:end, :);
        % Loop over the array of training-set percentages
        for pRows = 1:length(percent)
            percentOfRows = round((percent(pRows)/100) * size(train, 1));
            new_train = train(1:percentOfRows, :);
            new_train_label = train_labels(1:percentOfRows);
            % Number of unique labels in the training subset
            numClasses = size(unique(new_train_label), 1);
            classMean = zeros(numClasses, size(new_train, 2));
            classStd = zeros(numClasses, size(new_train, 2));
            classPrior = zeros(numClasses, 1);
            for kclass = 1:numClasses
                inClass = (new_train_label == kclass);
                % Per-class feature means and standard deviations
                classMean(kclass, :) = mean(new_train(inClass, :));
                classStd(kclass, :) = std(new_train(inClass, :));
                % Class prior: fraction of training rows in this class
                classPrior(kclass) = sum(inClass) / size(new_train, 1);
            end
        end
    end
end
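
For reference, a call looks like the following (the file name here is just an example; the .mat file is assumed to contain data and labels fields, as load(file) expects):

    % Hypothetical call: 5 random shuffles, training on 10/20/30/40% of the 80% split
    classMean = naivebayes('pima.mat', 5, [10 20 30 40]);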

1 Answer

First, compute the probability of every class label based on frequency counts (the class prior). Then, for a given sample and a given class, compute the probability of every feature under that class. Multiply the conditional probabilities of all features in the sample together, and multiply the result by the prior of the class under consideration. Finally, compare the values across all class labels and choose the label with the maximum probability (the Bayes classification rule).
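
As a minimal sketch of that decision rule in MATLAB (assuming classMean, classStd, and a per-class classPrior vector from training, and x a single 1-by-D test row; names are chosen to match the question's code where possible):

    % Unnormalized posterior for each class: prior times product of feature likelihoods
    numClasses = numel(classPrior);
    posterior = zeros(numClasses, 1);
    for kclass = 1:numClasses
        % Per-feature univariate Gaussian likelihoods, multiplied together
        likelihood = prod(normpdf(x, classMean(kclass, :), classStd(kclass, :)));
        posterior(kclass) = likelihood * classPrior(kclass);
    end
    % Predict the class with the maximum posterior (Bayes classification rule)
    [~, predicted_label] = max(posterior);

In practice, summing log-densities is numerically safer than multiplying many small values, but the direct product mirrors the description above.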

For computing the conditional probability of each feature, you can simply use the Normal (Gaussian) density function with the per-class mean and standard deviation you already estimated.
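
Concretely, for a feature value x with class mean mu and standard deviation sigma (placeholder names), the density can be computed directly, which is equivalent to normpdf(x, mu, sigma) from the Statistics Toolbox:

    % Univariate Gaussian density, elementwise over features
    p = (1 ./ (sigma * sqrt(2*pi))) .* exp(-(x - mu).^2 ./ (2 * sigma.^2));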