
Why exactly do the glTexImage1D, glTexImage2D, and glTexImage3D functions require both an internal format (e.g. GL_RGBA8, GL_R32UI, etc.) and a pixel format (e.g. GL_RGBA, GL_RED_INTEGER)?

It seems to me that the pixel format could easily be inferred from the internal format. I ask not just out of curiosity, but because I want to make sure the OpenGL texture object and framebuffer object wrappers I've written (which derive the pixel format from the internal format) can correctly do so for every internal format. A sketch of that idea is given below.
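
For illustration only, here is a minimal sketch of the kind of mapping such a wrapper might perform. The function name and the (deliberately incomplete) switch are hypothetical, and it assumes a loader such as GLAD or GLEW that exposes the GL 3.x enums:

    #include <GL/gl.h>   /* assumes GLAD/GLEW or similar supplies the modern enums */

    /* Hypothetical helper: derive the matching pixel (client data) format
     * from a sized internal format. Not exhaustive. */
    GLenum pixel_format_for(GLenum internal_format)
    {
        switch (internal_format) {
        /* normalized / float formats pair with the plain pixel formats */
        case GL_R8:       case GL_R32F:     return GL_RED;
        case GL_RG8:      case GL_RG32F:    return GL_RG;
        case GL_RGB8:     case GL_RGB32F:   return GL_RGB;
        case GL_RGBA8:    case GL_RGBA32F:  return GL_RGBA;
        /* integer formats must pair with the *_INTEGER pixel formats */
        case GL_R32UI:    case GL_R32I:     return GL_RED_INTEGER;
        case GL_RGBA32UI: case GL_RGBA32I:  return GL_RGBA_INTEGER;
        /* depth / stencil */
        case GL_DEPTH_COMPONENT24:          return GL_DEPTH_COMPONENT;
        case GL_DEPTH24_STENCIL8:           return GL_DEPTH_STENCIL;
        default:                            return GL_RGBA; /* fallback */
        }
    }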


1 Answer


The pixel format describes the format of the data you pass in; the internal format is how OpenGL will store the texture internally. OpenGL handles the conversion between the two automatically.
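
To illustrate (this example is not from the original answer, and the 4x4 RGB source data is made up), the two formats appear as separate parameters in a single upload call:

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    GLubyte pixels[4 * 4 * 3] = {0};  /* 4x4 source image, 3 bytes (RGB) per pixel */
    glTexImage2D(GL_TEXTURE_2D,
                 0,                   /* mip level */
                 GL_RGBA8,            /* internal format: stored by the GL as 8-bit RGBA */
                 4, 4, 0,             /* width, height, border */
                 GL_RGB,              /* pixel format of the data being passed in */
                 GL_UNSIGNED_BYTE,    /* pixel type of the data being passed in */
                 pixels);             /* the GL adds alpha = 1 while converting to RGBA8 */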

The man page for glTexImage2D has a nice set of tables showing the layout of the internal formats, and it also explains how each pixel format is converted.

For example, if you pass data with a pixel format of GL_RG and request an internal format of GL_COMPRESSED_RGBA, the GL will fill the blue channel with 0 and the alpha channel with 1, then perform its own internal compression. The man page describes the GL_RG case like this:

Each element is a red/green double. The GL converts it to floating point and assembles it into an RGBA element by attaching 0 for blue, and 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1].
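
As a hedged sketch of that case (the 8x8 two-channel source data is invented for the example; the texture is assumed to be created and bound as above):

    GLubyte rg_pixels[8 * 8 * 2] = {0};  /* 8x8 source image, red + green per pixel */
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_COMPRESSED_RGBA,     /* GL chooses a compressed RGBA representation */
                 8, 8, 0,
                 GL_RG,                  /* source pixels are red/green pairs */
                 GL_UNSIGNED_BYTE,
                 rg_pixels);             /* blue becomes 0, alpha becomes 1, then compressed */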