I would like to map an image onto a triangle mesh. For each triangle, I know the positions of its vertices both in UV space and in image/texture space.
OpenGL solves this problem by defining the correspondences between vertices in UV space and image/texture space.
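In case it clarifies my mental model, here is a minimal NumPy sketch of how I picture the per-triangle interpolation working, via barycentric weights. The triangle positions and UVs below are made-up values, not anything from a real pipeline:

```python
import numpy as np

def barycentric(p, a, b, c):
    # Solve p = u*a + v*b + w*c with u + v + w = 1 (2D points).
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    v, w = np.linalg.solve(T, p - a)
    return np.array([1.0 - v - w, v, w])

# Made-up example: a triangle in screen space and its texture coordinates.
screen = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
uv     = np.array([[0.0, 0.0], [1.0,   0.0], [0.0,   1.0]])

p = np.array([25.0, 25.0])
w = barycentric(p, screen[0], screen[1], screen[2])
print(w @ uv)  # interpolated UV at p -> [0.25 0.25]
```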
- Is OpenGL just fitting a linear (affine) transformation to these correspondences to determine how to interpolate the texture across each triangle?
- Why isn't it necessary to define a homography to do the mapping? In what cases would defining a homography be necessary? (That would require four point correspondences instead of three; see the sketch after this list for what I mean.)
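To make the first question concrete, here is a sketch of what I mean by a transformation defined by the correspondences: a 2D affine map has six unknowns and each point correspondence gives two equations, so three points determine it exactly, whereas a homography has eight unknowns and would need four. All point values below are made up for illustration:

```python
import numpy as np

# Made-up correspondences: the same triangle in UV space and image space.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])          # UV space
dst = np.array([[10.0, 20.0], [110.0, 25.0], [15.0, 120.0]])  # image space

# x' = a*x + b*y + tx,  y' = c*x + d*y + ty  -> 6 unknowns,
# 2 equations per correspondence, so 3 points pin the map down exactly.
A = np.zeros((6, 6))
rhs = np.zeros(6)
for i, (x, y) in enumerate(src):
    A[2 * i]     = [x, y, 1, 0, 0, 0]
    A[2 * i + 1] = [0, 0, 0, x, y, 1]
    rhs[2 * i:2 * i + 2] = dst[i]

a, b, tx, c, d, ty = np.linalg.solve(A, rhs)
M = np.array([[a, b, tx],
              [c, d, ty]])

# Every point inside the triangle maps through this one matrix.
# A homography would instead have 8 unknowns (its denominator terms),
# hence the four correspondences mentioned above.
print(M @ np.array([0.25, 0.25, 1.0]))
```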