
I understand that a neural network with enough hidden layers can approximate a nonlinear function, but can it also reproduce particular decision rules, specifically, can it do the same thing as certain statistical methods?

Suppose the statistical rule for a classification problem is as follows. For a training set with inputs X_train and outputs Y_train, we compute the center of each class (i.e., the average of the X_train points belonging to that class). So for each class we have a center. For a test point, we then predict the class label by finding the trained center with the shortest Euclidean distance to the point. For example, suppose training gives us the centers (-1,1,1) -> class 0 and (1,1,1) -> class 1. Then a test point (-0.8,0.5,1) is closer to (-1,1,1), so it should belong to class 0.
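To make the rule concrete, here is a minimal NumPy sketch of what I mean (the helper names fit_centroids and predict are just for illustration):

```python
import numpy as np

def fit_centroids(X_train, y_train):
    """Compute the mean (center) of the training points in each class."""
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(X_test, classes, centroids):
    """Assign each test point to the class of its nearest center."""
    # Euclidean distance from every test point to every center
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# The example above: centers (-1,1,1) -> class 0 and (1,1,1) -> class 1
classes = np.array([0, 1])
centroids = np.array([[-1.0, 1.0, 1.0], [1.0, 1.0, 1.0]])
print(predict(np.array([[-0.8, 0.5, 1.0]]), classes, centroids))  # [0]
```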

The problem is that I do not know of any supervised learning method that implements this strategy; I would call it 'supervised k-means'. The KNN method is similar, but it assigns the label based on the N nearest points rather than on the average of all training points in a class.

I am wondering whether neural networks can do this. Or am I missing other learning techniques that implement the above strategy? And what if the statistical strategy I am trying to learn is more complex, for example involving both the center and the covariance?


1 Answer


Using a neural network for such a problem would be overkill.

Linear discriminant analysis (LDA) and Gaussian naive Bayes (GNB) do something similar to what you describe. They estimate the center of each class as the arithmetic average of its training points and relate each point to the nearest center. However, they use modified distances instead of the plain Euclidean one: GNB estimates the conditional variance of each feature, and LDA also estimates the covariances between features. Both additionally take prior class probabilities into account. These modifications would probably improve your classification, but if you don't want them, you can write the plain-centroid algorithm yourself.
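If you'd rather not write it yourself, here is a rough sketch using scikit-learn (the toy data is made up for illustration). NearestCentroid implements the plain Euclidean-centroid rule you describe, while GaussianNB and LinearDiscriminantAnalysis add the variance/covariance modifications mentioned above:

```python
import numpy as np
from sklearn.neighbors import NearestCentroid
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy training data: two clusters around (-1,1,1) and (1,1,1)
X_train = np.array([[-1.2, 0.9, 1.0], [-0.8, 1.1, 0.9], [-1.0, 1.0, 1.1],
                    [ 1.1, 0.9, 1.0], [ 0.9, 1.1, 0.9], [ 1.0, 1.0, 1.1]])
y_train = np.array([0, 0, 0, 1, 1, 1])
X_test = np.array([[-0.8, 0.5, 1.0]])

for clf in (NearestCentroid(),              # plain Euclidean distance to class centers
            GaussianNB(),                   # adds per-feature variances and priors
            LinearDiscriminantAnalysis()):  # adds a shared covariance matrix
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.predict(X_test))
```

Quadratic discriminant analysis goes one step further and estimates a separate covariance matrix per class, which is essentially the "center plus covariance" strategy from the end of your question.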