1 vote

The specification for the OpenGL function glTexImage2D() gives a large table of accepted internalFormat parameters. I'm wondering, though, whether it really matters what I set this parameter to, since the doc says

If an application wants to store the texture at a certain resolution or in a certain format, it can request the resolution and format with internalFormat. The GL will choose an internal representation that closely approximates that requested by internalFormat, but it may not match exactly.

which makes it seem as though OpenGL is just going to pick what it wants anyways. Should I bother getting an image's bit depth and setting the internalFormat to something like GL_RGBA8 or GL_RGBA16? All the code examples I've seen just use GL_RGBA...
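
For reference, a minimal sketch of the two variants I'm comparing (assumes a current GL context; the 512x512 size and the pixels pointer are placeholders):

    /* Same upload, generic vs. sized internal format.
       `pixels` and the dimensions are placeholders. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Generic: the driver picks the actual storage format. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Sized: explicitly request 8 bits per channel. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);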

The reference manual is not the OpenGL specification. It's just a webpage. – Nicol Bolas

2 Answers

2 votes

which makes it seem as though OpenGL is just going to pick what it wants anyways.

This is very misleading.

There are a number of formats that implementations are required to support more or less exactly as described. Implementations are indeed permitted to store them with more precision than requested, but they're not permitted to lose precision relative to the format you asked for. And there are advantages to using them (besides the obvious one of knowing exactly what you're getting).

First, it allows you to use specialized formats like GL_RGB10_A2, which is handy in certain situations (storing linear color values for deferred rendering, etc.). Second, FBOs are required to support any combination of image formats, but only if all of those image formats come from the list of required color formats for textures/renderbuffers (but not the texture-only list). If you're using any other internal formats, FBOs can throw GL_FRAMEBUFFER_UNSUPPORTED at you.
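
A rough sketch of that second point, assuming a GL 3.0+ context with function pointers loaded (the texture size is arbitrary):

    /* Attach a GL_RGB10_A2 color texture to an FBO and check completeness. */
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, 1024, 1024, 0,
                 GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    /* With required color formats this should never be GL_FRAMEBUFFER_UNSUPPORTED;
       with other internal formats it legally can be. */
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_UNSUPPORTED) {
        /* fall back to another format */
    }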

Third, the immutable texture storage functions (glTexStorage*) require the use of sized internal formats. And you should use immutable storage whenever it's available.
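
For example (a sketch, assuming GL 4.2 or ARB_texture_storage; pixels is a placeholder for your image data):

    /* glTexStorage2D only accepts sized formats; passing unsized GL_RGBA
       here would generate GL_INVALID_ENUM. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 512, 512);   /* 1 mip level */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);      /* upload data */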

In general, you should always use sized internal formats. There's no reason to use the generic ones.

0 votes

Using a generic internal format tells OpenGL that you don't care, and it will choose whatever it "likes" best. With an explicit (sized) internal format, you're telling OpenGL that you actually care about the internal representation (most likely because you need the precision). While an implementation is free to up- or downgrade if an exact match cannot be made, the usual fallback is to upgrade to the next higher precision that can satisfy the requested demands.
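
If you want to see what the implementation actually chose, you can query it after the upload; a small sketch, assuming a bound 2D texture (printf needs <stdio.h>):

    /* Ask the driver what it really stored for mip level 0. */
    GLint chosen = 0, red_bits = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &chosen);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &red_bits);
    printf("internal format: 0x%x, red bits: %d\n", chosen, red_bits);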

Should I bother getting an image's bit depth and setting the internalFormat

If you absolutely require the precision, then yes. If your concern is more about performance, then no: the usual default of OpenGL implementations out there is to choose the internal format that gives the best performance when no specific format has been requested.