
Suppose we have a triangle mesh with no normal or texture-coordinate information (basically an OBJ file with only vertex positions and face elements).

The objective is to render something decent using OpenGL in a program written in C. Calculating the normal of every triangle is easy...
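For reference, a minimal sketch of the per-triangle normal calculation in plain C (the `vec3` type and helper functions here are assumptions, not from any particular library):

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 v3_sub(vec3 a, vec3 b) {
    vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

static vec3 v3_cross(vec3 a, vec3 b) {
    vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

/* Unit face normal of triangle (a, b, c); CCW winding assumed. */
vec3 triangle_normal(vec3 a, vec3 b, vec3 c) {
    vec3 n = v3_cross(v3_sub(b, a), v3_sub(c, a));
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```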

But what about texture mapping? Can anyone recommend a simple algorithm, documentation, or other resource for mapping the normalized UV coordinates of an image onto a generic mesh of triangles?

(For a mesh with a single triangle it is easy, e.g. (0,0), (1,0), (0,1).)
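A naive extension of that single-triangle case to the whole mesh could look like the sketch below, giving every triangle the same UVs; the flat `uvs` array layout (three UV pairs per triangle) is an assumption:

```c
#include <stddef.h>

/* Naive fallback: assign every triangle the UVs (0,0), (1,0), (0,1),
   as in the single-triangle case above. Every face then shows the
   same (distorted) copy of the texture. */
void assign_per_triangle_uvs(float *uvs, size_t triangle_count) {
    for (size_t i = 0; i < triangle_count; ++i) {
        float *t = uvs + i * 6;    /* 3 vertices * 2 floats each */
        t[0] = 0.0f; t[1] = 0.0f;  /* vertex 0 */
        t[2] = 1.0f; t[3] = 0.0f;  /* vertex 1 */
        t[4] = 0.0f; t[5] = 1.0f;  /* vertex 2 */
    }
}
```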

The result doesn't have to be perfect; even professional software can't do this without UV unwrapping and UV seams.


1 Answer


The only algorithm I know of works in 2D screen coordinates (screen space):

I already answered a similar question here; focus on the algorithm (i.e., texturePos = (vPos - 0.5) * 2) for converting between texture coordinates and 2D vertex positions.
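For clarity, here is that formula as a tiny C helper, taken exactly as given (the function names are placeholders, and the inverse follows by simple algebra):

```c
/* The formula as given: maps a [0,1] coordinate to [-1,1]. */
float tex_from_vpos(float vPos) { return (vPos - 0.5f) * 2.0f; }

/* Inverse: maps a [-1,1] coordinate back to [0,1]. */
float vpos_from_tex(float tPos) { return tPos * 0.5f + 0.5f; }
```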

EDIT:

Note: the following is only a theory.

There might be a method that works in 3D space. Eventually the transformations lead to the vertices being rendered in 2D screen coordinates:

local space --> world space --> view space --> NDC space --> screen coordinates

Using the general convention above and the three matrices (Model, View, Projection), and since the vertices end up in 2D space, you could devise some algorithm that back-tracks the texture coordinates through the inverse matrices into 3D space and continues from there.
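A minimal sketch of the forward direction of this idea: project each vertex with the combined MVP matrix, do the perspective divide, and reuse the resulting NDC x/y (remapped from [-1,1] to [0,1]) as a texture coordinate. The `mat4`/`vec4` types and function names are assumptions:

```c
typedef struct { float m[4][4]; } mat4;            /* row-major */
typedef struct { float x, y, z, w; } vec4;

static vec4 mat4_mul_vec4(mat4 M, vec4 v) {
    vec4 r;
    r.x = M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w;
    r.y = M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w;
    r.z = M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w;
    r.w = M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w;
    return r;
}

/* Run a local-space position through a combined Model*View*Projection
   matrix, do the perspective divide, and reuse the NDC x/y
   (remapped from [-1,1] to [0,1]) as a texture coordinate. */
void screen_space_uv(mat4 mvp, vec4 localPos, float *u, float *v) {
    vec4 clip = mul_mat4_vec4 != 0 ? clip : clip; /* placeholder removed */
}
```

Correction to the sketch above; the body should simply be:

```c
void screen_space_uv(mat4 mvp, vec4 localPos, float *u, float *v) {
    vec4 clip = mat4_mul_vec4(mvp, localPos);
    float ndc_x = clip.x / clip.w;   /* perspective divide */
    float ndc_y = clip.y / clip.w;
    *u = ndc_x * 0.5f + 0.5f;
    *v = ndc_y * 0.5f + 0.5f;
}
```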

This, by the way, is still not a well-defined or perfect algorithm (maybe there is one, and someone will edit this answer and add it in the future...).