I have implemented a deferred renderer and am trying to use multisample textures for anti-aliasing.

I render the scene into an FBO with multisample textures, use glBlitFramebuffer to resolve them into regular textures in a second FBO, and finally bind those textures to the lighting shader that produces the final image.

// draw to textures
mMultiGeometryFBO->bind();
glViewport(0,0,mWidth,mHeight);
glEnable(GL_DEPTH_TEST);
glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );

// calling all modules to draw to FBO
for(auto r : mRenderer)
    r->renderMaterial(camera);

glBindFramebuffer(GL_READ_FRAMEBUFFER, mMultiGeometryFBO->fbo());
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mGeometryFBO->fbo());

glReadBuffer(GL_COLOR_ATTACHMENT0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBlitFramebuffer(0, 0, mWidth, mHeight,
                  0, 0, mWidth, mHeight, GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST); // blits that include the depth buffer must use GL_NEAREST

glReadBuffer(GL_COLOR_ATTACHMENT1);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBlitFramebuffer(0, 0, mWidth, mHeight,
                  0, 0, mWidth, mHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);

glReadBuffer(GL_COLOR_ATTACHMENT2);
glDrawBuffer(GL_COLOR_ATTACHMENT2);
glBlitFramebuffer(0, 0, mWidth, mHeight,
                  0, 0, mWidth, mHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);

// draw to screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_DEPTH_BUFFER_BIT);

mSkybox->renderMaterial(camera);

mShader->use();
mShader->setTexture("tDiffuse", mDiffuseColor, 0);
mShader->setTexture("tNormal", mNormals, 1);
mShader->setTexture("tMaterial", mMaterialParams, 2);
mShader->setTexture("tDepth", mDepthBuffer, 3);
mShader->setTexture("tLights", mLightColor, 4);
mQuad->draw();

This produces a visible line at the horizon (between the geometry and the skybox); its color is the clear color. Clearing only the depth buffer reduces the problem while the camera is moving. Rendering the skybox into the FBO before the geometry makes the artifact less visible, but the line is still there.

Edit: forgot the picture (screenshot of the line at the horizon).

1 Answer


Resolving the multisample target before the lighting pass does not make sense conceptually. What you get is that the values in your G-buffers are averaged at the edges of objects. This is especially bad for the normal directions. Think about it: if a pixel contains 50% of your ground plane and 50% of your sky, you will get a normal direction of (normal_ground + normal_sky)/2. That is totally different from lighting each of these parts with its original normal and then mixing the resulting colors.
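
To see how different the two results can be, here is a minimal, self-contained numeric sketch (assuming glm is available; the surfaces, normals and light direction are purely illustrative, not taken from the question). It lights one edge pixel shared by two perpendicular surfaces, once per sample and once with the averaged normal:

#include <cstdio>
#include <glm/glm.hpp>

int main()
{
    // An edge pixel half-covered by two surfaces meeting at 90 degrees.
    glm::vec3 n0(0.0f, 1.0f, 0.0f);        // sample 0: top face
    glm::vec3 n1(1.0f, 0.0f, 0.0f);        // sample 1: side face
    glm::vec3 lightDir(0.0f, 1.0f, 0.0f);  // light straight from above

    // Correct: light each sample with its own normal, then average the results.
    float lit0 = glm::max(glm::dot(n0, lightDir), 0.0f); // 1.0
    float lit1 = glm::max(glm::dot(n1, lightDir), 0.0f); // 0.0
    float perSample = 0.5f * (lit0 + lit1);              // 0.5

    // Wrong: resolve (average) the G-buffer normal first, then light once.
    glm::vec3 nAvg = glm::normalize(0.5f * (n0 + n1));               // ≈ (0.707, 0.707, 0)
    float resolvedFirst = glm::max(glm::dot(nAvg, lightDir), 0.0f);  // ≈ 0.707

    std::printf("per-sample lighting: %.3f, resolved-first lighting: %.3f\n",
                perSample, resolvedFirst);
    return 0;
}

With the original normals the pixel ends up at 0.5 intensity; with the resolved normal it comes out roughly 40% brighter. Along a geometry/skybox boundary the mismatch is even worse, because half of the pixel should not be lit at all, which is exactly the kind of line you are seeing at the horizon.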

If you want to do multisampling with deferred rendering, you have to use the multisampled targets for the lighting pass as well, enable per-sample shading, and actually access and light each sample individually, blitting only the final lit result to a non-multisampled target. However, that will be exorbitantly expensive: you essentially lose the cost advantage of multisampling over supersampling.
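
For completeness, a rough host-side sketch of that path (names such as mMsaaLightingFBO, mMsDiffuseTexture, mMsNormalTexture and mLightingShader are placeholders mirroring the question's style, not a tested implementation). The lighting pass reads the multisampled G-buffer directly and runs once per sample; only the finished image is resolved:

// Lighting pass directly on the multisampled G-buffer (no G-buffer resolve).
glBindFramebuffer(GL_FRAMEBUFFER, mMsaaLightingFBO); // multisampled color target

glEnable(GL_SAMPLE_SHADING);
glMinSampleShading(1.0f);     // force the fragment shader to run for every sample

mLightingShader->use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, mMsDiffuseTexture); // sampler2DMS in the shader
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, mMsNormalTexture);
// ... remaining G-buffer attachments ...
// In GLSL the shader fetches its own sample, e.g.
// texelFetch(tNormal, ivec2(gl_FragCoord.xy), gl_SampleID);
mQuad->draw();

glDisable(GL_SAMPLE_SHADING);

// Only the finished, lit image is resolved to the default framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, mMsaaLightingFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, mWidth, mHeight, 0, 0, mWidth, mHeight,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

Note that GL_SAMPLE_SHADING and gl_SampleID require OpenGL 4.0 or ARB_sample_shading.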

I don't know if there are neat tricks to make multisampling work more efficiently here, but the usual approach is to not use multisampling at all and to do the anti-aliasing in an image-based post-processing pass (FXAA, SMAA and similar filters).
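
A bare-bones outline of that alternative, assuming a hypothetical full-screen AA shader (mPostAAShader) and an intermediate non-multisampled FBO (mLitSceneFBO); this only shows where the pass slots into the frame, not the filter itself:

// 1. Render the lit scene (no MSAA anywhere) into an intermediate texture.
glBindFramebuffer(GL_FRAMEBUFFER, mLitSceneFBO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... geometry pass + lighting pass as before, without multisample targets ...

// 2. Run the anti-aliasing filter as a full-screen pass over the lit image.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
mPostAAShader->use();
mPostAAShader->setTexture("tScene", mLitSceneColorTexture, 0);
mQuad->draw();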