The specification for the OpenGL function glTexImage2D() gives a large table of accepted internalFormat parameters. I'm wondering, though, whether it really matters what I set this parameter to, since the docs say:
If an application wants to store the texture at a certain resolution or in a certain format, it can request the resolution and format with internalFormat. The GL will choose an internal representation that closely approximates that requested by internalFormat, but it may not match exactly.
which makes it sound as though OpenGL is just going to pick whatever it wants anyway. Should I bother reading an image's bit depth and setting internalFormat to something like GL_RGBA8 or GL_RGBA16? All the code examples I've seen just use GL_RGBA...
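
For context, here's a minimal sketch of what I'm considering (assuming a current desktop OpenGL context; upload_rgba8 and pixels are placeholder names for illustration). It requests the sized format GL_RGBA8 and then queries what the implementation actually allocated, since the spec only promises an approximation:

```c
/* Minimal sketch: request a sized internal format, then ask the driver
   what it actually chose. Assumes a current desktop OpenGL context;
   `pixels` stands in for whatever image data was decoded. */
#include <GL/gl.h>
#include <stdio.h>

GLuint upload_rgba8(const void *pixels, GLsizei width, GLsizei height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    glTexImage2D(GL_TEXTURE_2D,
                 0,                /* mipmap level */
                 GL_RGBA8,         /* sized request: 8 bits per channel */
                 width, height,
                 0,                /* border (must be 0) */
                 GL_RGBA,          /* layout of the data passed in */
                 GL_UNSIGNED_BYTE, /* type of the data passed in */
                 pixels);

    /* The GL may substitute a close approximation, but it can be
       asked what it actually picked: */
    GLint actual = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                             GL_TEXTURE_INTERNAL_FORMAT, &actual);
    printf("requested GL_RGBA8 (0x%04X), got 0x%04X\n", GL_RGBA8, actual);

    return tex;
}
```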