I have been reading a lot about support vector machines, and in all the books and online articles I've seen, an SVM is categorized as a linear classifier that uses a separating hyperplane. If the data are not linearly separable, they can be mapped to a higher-dimensional space in which a linear boundary does exist.
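To make sure I'm stating that standard view correctly, here is a minimal sketch (using scikit-learn on a toy dataset I made up) of what I understand by "linear classifier that uses a hyperplane": after fitting, the model is just a weight vector w and an intercept b, and prediction is a sign check on w·x + b.

```python
# Minimal sketch of the "hyperplane" view of an SVM (toy data of my own).
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) - 2, rng.randn(20, 2) + 2])  # two separable blobs
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear").fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]       # the learned hyperplane parameters
manual = np.where(X @ w + b > 0, 1, 0)       # classify using w and b directly
print(np.array_equal(manual, clf.predict(X)))  # True: it really is just a hyperplane
```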
Now, I've come across some articles and slides by Professor Pedro Domingos of the University of Washington, a well-known expert in machine learning. He specifically categorizes the SVM as an instance-based machine learning algorithm, similar to kNN. Can anyone explain that to me?
For example, in an article in Communications of the ACM (October 2012), he specifically puts the SVM under the "instances"-based representation, whereas most machine learning texts would put it under "hyperplanes" alongside logistic regression.
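For context on why the "instances" framing at least looks plausible to me, here is a sketch (again scikit-learn, with toy data and a gamma value I chose arbitrarily) showing that a kernel SVM's decision function can be evaluated purely from the stored support vectors, their dual coefficients, and kernel evaluations against the query points, with no explicit weight vector in sight.

```python
# Sketch of the dual form: an RBF-kernel SVC's decision function recomputed
# from its stored support vectors (toy data and gamma are my own choices).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.randn(40, 2)
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # XOR-like labels, not linearly separable

gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)

# f(x) = sum_i dual_coef_i * K(sv_i, x) + b, summed over the support vectors only
K = rbf_kernel(clf.support_vectors_, X, gamma=gamma)
manual = (clf.dual_coef_ @ K).ravel() + clf.intercept_[0]
print(np.allclose(manual, clf.decision_function(X)))  # True
```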

Furthermore, in his lecture slides, he gives this reasoning:


Can someone explain this line of reasoning? Why would the SVM be an instance-based learner (like kNN) rather than a linear classifier (like logistic regression)?