1 vote

So far, my understanding of cube mapping has been that 3D texture coordinates need to be specified for each vertex used within a cube, as opposed to (u,v) coordinates for 2D textures.

Some Assumptions

  • Cube maps use normalized vertices to represent the texture coordinates of a triangle.

  • These normalized vertices correspond to the actual vertices specified: each normalized texture coordinate is derived from its corresponding vertex and that vertex's magnitude.

  • Thus, if each component of a vertex has a magnitude of 1, then each component of its normalized texture coordinate, N, is 1.0f / sqrt(3.0f);

Which of these assumptions are correct and incorrect? If any are incorrect, please specify why.

Edit

While not necessary, an example, or at least an idea of the recommended way to go about this using the programmable pipeline, would be appreciated.

Sorry, but your terminology is completely messed up. – datenwolf
Yup, I realized that after I did some more digging. You're referring to the fact that it uses vertex normals instead, right? – zeboidlund
No, I'm not referring to normals. Cubemaps use 3-dimensional texture coordinates. However, the texture coordinate does not designate a position, but a direction, i.e. a ray originating from the center of the cube. – datenwolf

1 Answer

3 votes

Cubemaps are textures that consist of 6 square textures arranged in a cube topology. The only quantity that matters about a cubemap texture coordinate is its direction: texels are addressed by the direction of a vector originating at the cube's center, and the length of that vector is irrelevant. Say you have two cubemap texture coordinates

(1, 1, 0.5)

and

(2, 2, 1)

they both address the same cubemap texel.
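
Since the question's edit asks for a programmable-pipeline example, here is a minimal GLSL fragment shader sketch of a cubemap lookup. The names vDirection and uEnvMap are made up for illustration; the vertex shader is assumed to pass some vector pointing from the cube's center through the vertex (for a skybox, the untransformed vertex position itself can serve as that direction).

    #version 330 core

    in vec3 vDirection;           // interpolated direction from the cube's center (hypothetical name)
    uniform samplerCube uEnvMap;  // the cubemap being sampled (hypothetical name)

    out vec4 fragColor;

    void main()
    {
        // Only the direction of the lookup vector matters, not its length:
        // texture(uEnvMap, vDirection) and texture(uEnvMap, 2.0 * vDirection)
        // fetch the same texel.
        fragColor = texture(uEnvMap, vDirection);
    }

With a shader like this, a lookup with (1, 1, 0.5) and a lookup with (2, 2, 1) return exactly the same texel, as described above.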