
Comparison of Space and Time Complexity in Big-O between kNN (Nearest Neighbor) and any Classifier?

Thoughts about kNN:

For a case base used in kNN with n examples and d features per example, a brute-force query computes the distance to every stored example, which costs O(n x d). Selecting the k nearest neighbours adds at most O(n x k) (or O(n log k) with a heap), so the worst-case time complexity is O(n x d + n x k).
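To make the counting concrete, here is a minimal brute-force kNN sketch (function and variable names are illustrative, not from any library): the distance computation touches all n x d entries, and `argpartition` selects the k smallest distances in O(n).

```python
import numpy as np

def knn_predict(X, y, q, k):
    # X: (n, d) case base, y: (n,) labels, q: (d,) query
    dists = np.linalg.norm(X - q, axis=1)    # O(n x d) distance computations
    nearest = np.argpartition(dists, k)[:k]  # O(n) selection of the k smallest
    labels, counts = np.unique(y[nearest], return_counts=True)
    return labels[np.argmax(counts)]         # majority vote over k neighbours

# Tiny example: the query lies near the two class-0 points.
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.2], [5.0, 5.0]])
y = np.array([0, 1, 0, 1])
print(knn_predict(X, y, np.array([0.05, 0.1]), k=3))  # -> 0
```

Note that the whole (n, d) matrix X must stay in memory at query time, which is exactly the space cost discussed below.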

Space complexity: I would say it is O((n + 1) x d) - the n stored examples plus one query, each with d features. Note that k does not affect storage, since the neighbours are a subset of the stored examples.

Thoughts about a Classifier:

A parametric classifier uses no neighbours for classification and does not need to keep the training examples in memory at prediction time, so effectively n = 0 and k = 0. The feature dimension d remains, and in addition we have the classifier's parameters. If it is a single fully connected neural network layer (feedforward / MLP) with o output neurons followed by a softmax, we could say:

Time complexity: O(d x o + o)

where d is the dimension of the input features and o the number of outputs, giving d x o multiply-adds for the layer; the added o accounts for the softmax.

Space complexity: O(d x o + o), since I only have to hold one example with d features in memory (absorbed into the d x o term) together with the network's weights (d x o) and biases (o).
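The layer described above can be sketched as follows (W, b, and the sizes are hypothetical, randomly initialised for illustration): the matrix-vector product performs d x o multiply-adds, the softmax another O(o) operations, and memory holds only the d x o weights, o biases, and one d-dimensional input.

```python
import numpy as np

rng = np.random.default_rng(0)
d, o = 4, 3                     # input features, output classes (illustrative)
W = rng.normal(size=(o, d))     # d x o weight parameters
b = rng.normal(size=o)          # o bias parameters

def classify(x):
    z = W @ x + b               # O(d x o) multiply-adds
    p = np.exp(z - z.max())     # softmax: O(o), shifted for numerical stability
    return p / p.sum()

x = rng.normal(size=d)          # a single query: only O(d) extra memory
probs = classify(x)
print(probs.shape, probs.sum()) # o probabilities summing to 1
```

No training example appears anywhere at prediction time, which is the point of the n = 0, k = 0 argument.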

So since d x o is typically much smaller than n x d (i.e. o << n), we can say that the classifier's space and time complexity at prediction time is smaller than for kNN. Right?

Are these thoughts correct?