
I am learning how to use framebuffers in OpenGL. From what I understand so far, if I want to render an image to a framebuffer, I would first create a texture and then attach it to one of the framebuffer's colour attachments. The code might look something like this:

GLuint handle;
glGenTextures(1, &handle);
glBindTexture(GL_TEXTURE_2D, handle);

// Allocate storage for an empty width x height texture (the data pointer is NULL).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

glBindTexture(GL_TEXTURE_2D, 0);

This code generates a texture handle, binds it, and allocates an empty texture for the GL_TEXTURE_2D target. Because the format is specified as GL_RGB and the data type as GL_UNSIGNED_BYTE, after rendering to this texture I would expect the data to look like:

R   G   B   R   G   B   R   G   B   R   G   B   R   G   B   R ...
23  25  40  1   4   67  255 255 255 0   0   1   3   5   55  72 ...

So there are 3 channels, each of them a single byte. Now I think I can attach this texture as a colour attachment of a framebuffer like this:

GLuint fbohandle;
glGenFramebuffers(1, &fbohandle);
glBindFramebuffer(GL_FRAMEBUFFER, fbohandle);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, handle, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
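To check my setup, I believe I could query the framebuffer status and read the pixels back, which should show the interleaved layout from above (a sketch reusing handle, fbohandle, width and height):

glBindFramebuffer(GL_FRAMEBUFFER, fbohandle);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // this attachment combination is unusable on the current driver
}

// Read back the colour attachment as tightly packed RGB bytes.
unsigned char *pixels = (unsigned char *)malloc(width * height * 3);
glPixelStorei(GL_PACK_ALIGNMENT, 1); // avoid 4-byte row padding
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
// pixels[0], pixels[1], pixels[2] hold R, G, B of the bottom-left pixel
free(pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);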

Hopefully my understanding of this is correct. Now consider trying to render depth values to a texture instead. This might look like:

glGenTextures(1, &handle);
glBindTexture(GL_TEXTURE_2D, handle);

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

glBindTexture(GL_TEXTURE_2D, 0);

glGenFramebuffers(1, &fbohandle);
glBindFramebuffer(GL_FRAMEBUFFER, fbohandle);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, handle, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
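One thing I have read is that a framebuffer with no colour attachment is reported incomplete unless the draw and read buffers are explicitly disabled, so I assume the depth-only setup above would also need this while fbohandle is bound:

glBindFramebuffer(GL_FRAMEBUFFER, fbohandle);
// No colour buffer is ever written to or read from this framebuffer.
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // the depth-only framebuffer is incomplete
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);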

In this case I generate an empty texture, this time intended to store one float value per pixel for the depth (so the texture has a single channel). I would therefore expect each value in the depth texture to have 32-bit precision. However, I have often seen it stated that the depth component is stored as 16 or 24 bits. For example, the Unity engine documentation on RenderTextures says: "On OpenGL it is the native "depth component" format (usually 24 or 16 bits), on Direct3D9 it is the 32 bit floating point ("R32F") format."

Indeed, looking at the Khronos wiki I see that there exist GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT32 and GL_DEPTH_COMPONENT32F. If I were to use GL_DEPTH_COMPONENT24 as the internal format of my texture (to create a depth texture with 24 bits of precision), what should I use as the data type? As far as I know there is no such thing as a 24-bit float. Is my understanding of framebuffers and textures (as described here) correct in terms of how the data is stored? And how would one create a 24 (or 16) bit precision depth buffer?


1 Answer


If I were to use GL_DEPTH_COMPONENT24 for the format of my texture (to create a depth texture with 24 bits of precision), what should I use as my data type?

GL_UNSIGNED_INT. But according to this answer, the choice is not important unless you want to access the depth data from the CPU, because the format and type parameters only describe the client memory you pass in (which is NULL here): https://stackoverflow.com/a/19307871/126995
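As a sketch, reusing the texture setup from the question, the allocation could look like this:

// Sized internal format: the texture is guaranteed 24 bits of depth.
// GL_UNSIGNED_INT only describes the client data, which is NULL here.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);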

How would one create a 24 (or 16) bit precision depth buffer?

16-bit depth buffers contain normalized integers just like 24-bit buffers; the correct data type for them is GL_UNSIGNED_SHORT.
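For example, a sketch of the 16-bit case, plus a readback where the data type finally matters (reusing handle, width and height from the question):

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);

// Reading the texture back on the CPU is where the type matters:
// each texel arrives as a normalized integer, 0 = 0.0 and 65535 = 1.0.
GLushort *depths = (GLushort *)malloc(width * height * sizeof(GLushort));
glBindTexture(GL_TEXTURE_2D, handle);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, depths);
free(depths);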