3
votes

In order to implement "depth peeling", I render my OpenGL scene into a series of framebuffers, each equipped with an RGBA color texture and a depth texture. This works fine if I don't care about anti-aliasing. If I do, then it seems the correct thing to do is enable GL_MULTISAMPLE and use a GL_TEXTURE_2D_MULTISAMPLE instead of a GL_TEXTURE_2D. But I'm confused about which other calls need to be replaced.
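For reference, each layer's framebuffer is currently set up roughly like this (single-sample; the exact formats, filtering, and names here are just illustrative):

    GLuint fbo, colorTex, depthTex;

    // RGBA color texture
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // Depth texture (needed for peeling, since the previous layer's depth is sampled)
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

    // Attach both to the FBO
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);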

In particular, how should I adapt my framebuffer construction to use glTexImage2DMultisample instead of glTexImage2D?

Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?

If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?

Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?

There are many related questions on this topic, but none of the solutions I found seem to point to a canonical tutorial or a clear example putting all of these pieces together.

I think typically if you're doing things like depth peeling or deferred shading and rendering to FBOs, multisampled AA is off the table. I could be wrong, so I don't want to submit a formal answer, but I think generally if you're going down these kinds of routes, the common solution is to apply image/pixel-based AA such as FXAA, MLAA, or SMAA as a manual post-process. It's become very common in games lately to do this, and I think it's because they're required to as a result of using these kinds of techniques that involve rendering to offscreen textures. – user4842163
As for rendering what you drew offscreen, there is a glBlitFramebuffer function for that purpose, but you can get a lot more flexibility by just setting up an ortho projection and identity MV matrix and rendering textured quads using the FBO color attachment texture(s). That will also let you process the results in a fragment shader and do post-processing like AA or deferred shading. – user4842163
Thanks, Ike. I tried just rendering into a much larger buffer and downsampling with smoothing, but things like sharp 1px-thick lines just don't come out as easily or as nicely as with proper multisampling on screen. Not to mention it was pretty slow. – Alec Jacobson
Depending on your quality/speed/effort needs, there's quite a range of image-based AA algorithms that can run in a shader, including ones that require no additional video memory (no need to render to a larger buffer than usual). This article shows the FXAA method, for example, and how it looks: geforce.com/whats-new/articles/… – user4842163
This might also be of some relevance -- it discusses how to use offscreen multisampling: learnopengl.com/#!Advanced-OpenGL/Anti-Aliasing. It shows how to construct such a multisampled FBO. It's worth noting how the article uses glBlitFramebuffer to resolve an image from a multisample target, and how we can access texels from a multisampled texture using sampler2DMS in the fragment shader. One thing I'm not clear on is whether a single multisample FBO can contain both depth and color attachments at the same time. We might need two passes there. – user4842163

1 Answer

1
vote

While I have only used multisampled FBO rendering with renderbuffers, not textures, the following is my understanding.

Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?

No, that's all you need. You create the texture with glTexImage2DMultisample(), and then attach it using GL_TEXTURE_2D_MULTISAMPLE as the 3rd argument to glFramebufferTexture2D(). The only constraint is that the level (5th argument) has to be 0.
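A minimal sketch of that, assuming 4 samples and a multisample FBO already generated as msFbo (the names and sample count here are placeholders, not from the question):

    GLsizei samples = 4;
    GLuint msColorTex;
    glGenTextures(1, &msColorTex);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msColorTex);
    // Allocate multisample color storage; last argument is fixedsamplelocations
    glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8,
                            width, height, GL_TRUE);

    glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D_MULTISAMPLE, msColorTex, 0); // level must be 0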

If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?

Yes. If you attach a depth buffer to the same FBO, you need to use a multisampled renderbuffer, with the same number of samples as the color buffer. So you create your depth renderbuffer with glRenderbufferStorageMultisample(), passing in the same sample count you used for the color buffer.
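Something like this, reusing the same samples, width, and height as the color attachment (again, placeholder names):

    GLuint msDepthRbo;
    glGenRenderbuffers(1, &msDepthRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, msDepthRbo);
    // Multisampled depth storage with the same sample count as the color buffer
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples,
                                     GL_DEPTH_COMPONENT24, width, height);
    // Attach it to the same FBO as the multisample color texture
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, msDepthRbo);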

Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?

Not for rendering into the framebuffer. Once you're done rendering, you have a couple of options:

  1. You can downsample (resolve) the multisample texture to a regular texture, and then use the regular texture for your subsequent rendering. To resolve the multisample texture, you can use glBlitFramebuffer(), with the multisample texture attached to the GL_READ_FRAMEBUFFER and the regular texture attached to the GL_DRAW_FRAMEBUFFER (see the sketch after this list).

  2. You can use the multisample texture for your subsequent rendering. You will need to use the sampler2DMS type for the samplers in your shader code, with the corresponding sampling functions.
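Rough sketches of both options, assuming msFbo holds the multisample attachments and resolveFbo is a second FBO with a regular GL_TEXTURE_2D color attachment (all names are placeholders):

    // Option 1: resolve the multisample FBO into the single-sample FBO
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

    /* Option 2: sample the multisample texture directly in the fragment
       shader, declaring it as sampler2DMS and reading individual samples
       with texelFetch(), e.g.:

           uniform sampler2DMS msColor;
           ...
           vec4 c = texelFetch(msColor, ivec2(gl_FragCoord.xy), 0); // sample 0
    */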

For option 1, I don't really see a good reason to use a multisample texture. You might just as well use a multisample renderbuffer, which is slightly easier to use, and should be at least as efficient. For this, you create a renderbuffer for the color attachment, and allocate it with glRenderbufferStorageMultisample(), very much like what you need for the depth buffer.
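The renderbuffer version of the color attachment would look much like the depth case above (names again are placeholders):

    GLuint msColorRbo;
    glGenRenderbuffers(1, &msColorRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, msColorRbo);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples,
                                     GL_RGBA8, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, msColorRbo);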