
I have 2 shader programs: one for rendering sprites with textures, and a second one for rendering polygons. I have enabled blending and the Z-buffer like so:

GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);

GLES20.glEnable( GLES20.GL_DEPTH_TEST );
GLES20.glDepthFunc( GLES20.GL_LEQUAL );
GLES20.glDepthMask( true ); 
GLES20.glDepthRangef(0,  maxZDepth); //maxZDepth = 100f;

My rendering consists of 2 draw calls (glDrawElements): one for the sprites, and right after it one for the polygons, each with its own shader program. The objects' data (vertices etc.) is sent to the shader sorted from the lowest Z value to the highest, and I also had to add this instruction to my sprite fragment shader:

if(gl_FragColor.a == 0.0)
    discard;

Now, blending and the Z-buffer work properly, but only within a single shader's draw call. The blending of objects drawn by the first shader doesn't seem to be taken into account by the second one... Here's an example:

[screenshot: a sprite drawn over a brown polygon, with the grey clear colour showing around the sprite's edges]

The sprite here has a higher Z value than the brown polygon beneath it, which is why it's drawn on top of the polygon, but blending fails and you can see the grey background (set by glClearColor) showing around the sprite...

Does anybody know a good solution to this problem? I thought about combining the 2 shader programs into one, so there would be only 1 draw call, which I hope would solve it, but I'd prefer to keep 2 separate shader programs for sprites and polygons...

You're drawing the sprite first, presumably? - Tommy
Oh my... I was thinking of some stupidly complicated solutions and it turned out that the only thing to change was to render the polygons before rendering the sprites. I don't know why I hadn't tried that earlier... Thanks a lot! - Savail

1 Answer


Based on the brief comment discussion, the issue is:

The depth buffer holds only one depth per pixel. A partially transparent pixel combines two colours from different depths, but the buffer can be assigned only one depth, and that ends up being the depth of the closer fragment.

In an ideal world, if you drew something far away and opaque, then something near and transparent, then something in between and opaque, the final output would be a mix of the thing in between and the thing near. What actually happens is that the transparent thing writes its depth to the depth buffer. When you come to draw the in-between thing, no pixels are output, because it is further away than the nearest thing recorded in the depth buffer. So you end up with the far-away thing mixed with the near thing, as though the in-between thing had never been drawn.
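The failure mode above can be reproduced with a tiny single-pixel simulation. Everything here (the class, the draw method, the depth convention of smaller z = closer) is made up for illustration; the blend formula matches the GL_ONE / GL_ONE_MINUS_SRC_ALPHA premultiplied-alpha setup from the question, and the depth test matches GL_LEQUAL with depth writes on:

```java
// Hypothetical single-pixel framebuffer with a depth-tested, blended "draw".
// Smaller z = closer, as with GL_LEQUAL; every passing fragment writes depth.
public class DepthBlendDemo {
    static float[] color = {0.5f, 0.5f, 0.5f}; // grey clear colour
    static float depth = Float.MAX_VALUE;      // cleared depth (far away)

    // Premultiplied-alpha blend (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) behind
    // a depth test that also writes depth when the fragment passes.
    static void draw(float r, float g, float b, float a, float z) {
        if (z <= depth) {
            color[0] = r + (1 - a) * color[0];
            color[1] = g + (1 - a) * color[1];
            color[2] = b + (1 - a) * color[2];
            depth = z;
        }
    }

    public static void main(String[] args) {
        draw(1f, 0f, 0f,    1f,   10f); // far, opaque red
        draw(0f, 0f, 0.25f, 0.5f, 1f);  // near, half-transparent blue: writes depth 1
        draw(0f, 1f, 0f,    1f,   5f);  // in-between opaque green: REJECTED, 5 > 1
        // Result is red mixed with blue; the green never appears at all.
        System.out.println(color[0] + " " + color[1] + " " + color[2]);
    }
}
```

The third draw is discarded even though it should sit between the two surfaces that were already blended, which is exactly the "as though it had never been drawn" outcome.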

There are a bunch of solutions depending on how strictly accurate you want to be, how much of your geometry is at least partly transparent and how much time you have.

First of all, if you have any geometry that is definitely completely opaque then you can just draw all of that first, in whichever order is most efficient.

Sorting transparent geometry and rendering back to front is the most obvious solution. That's great, except that not all geometry can be drawn correctly just by going back to front (see e.g. mutual overlap), and in the most naive implementation your GL state changes can become hugely expensive: instead of drawing 50,000 triangles with one shader, then switching and drawing 50,000 with a second, you might end up drawing a triangle, switching, drawing, switching, drawing, etc., for 99,999 switches.

If additive transparency is acceptable then you can fill in the transparent stuff in any order at all, without writing to the depth buffer.
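The reason order stops mattering is that additive blending (GL_ONE, GL_ONE) reduces to a clamped sum per channel, and addition commutes. A small sketch of that arithmetic, using exactly representable float values so the comparison is exact:

```java
// Sketch of additive blending per colour channel: dst' = min(1, dst + src).
// Because addition commutes, draw order has no effect on the final colour.
public class AdditiveBlend {
    static float add(float dst, float src) {
        return Math.min(1f, dst + src);
    }

    public static void main(String[] args) {
        // Two transparent layers (0.5 and 0.125) over a 0.25 background,
        // drawn in either order:
        float aThenB = add(add(0.25f, 0.5f), 0.125f);
        float bThenA = add(add(0.25f, 0.125f), 0.5f);
        System.out.println(aThenB + " " + bThenA); // identical

        // The clamp keeps the channel from blowing past full brightness:
        System.out.println(add(0.875f, 0.5f)); // 1.0
    }
}
```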

Something Nvidia proposed was taking advantage of multi-sampling, but possibly ramping it up a little: imagine more like 8x8 or 16x16 samples being combined for each pixel. In that case you draw transparency not by reading from the frame buffer, mixing, and writing out again, but by writing, say, only half the samples in each cell if transparency is 50%, picking that half randomly. That gives you order-independent transparency whose quality increases as you increase the cell size.
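A toy model of that "screen-door" idea over one 16-sample cell (all names are invented for illustration): a 50%-alpha surface writes to half the samples instead of blending, and averaging the cell at resolve time recovers the blended colour. For clarity this uses a fixed coverage pattern; in the actual scheme the covered samples are chosen randomly so that overlapping transparent surfaces don't all pick the same ones.

```java
// One colour channel of a single pixel cell with 16 samples.
public class ScreenDoor {
    static final int SAMPLES = 16;
    static float[] cell = new float[SAMPLES];

    // Cover roughly alpha * SAMPLES samples with the surface's colour,
    // instead of blending. Fixed pattern here; randomised in practice.
    static void drawTransparent(float colour, float alpha) {
        int covered = Math.round(alpha * SAMPLES);
        for (int i = 0; i < covered; i++) cell[i] = colour;
    }

    // Resolving the cell averages its samples into the final pixel colour.
    static float resolve() {
        float sum = 0f;
        for (float s : cell) sum += s;
        return sum / SAMPLES;
    }

    public static void main(String[] args) {
        java.util.Arrays.fill(cell, 0f); // opaque black background
        drawTransparent(1f, 0.5f);       // 50% white covers 8 of the 16 samples
        System.out.println(resolve());   // 0.5, same as a proper 50% blend
    }
}
```

No read-modify-write of the frame buffer colour is needed, which is why the result doesn't depend on the order the transparent surfaces are drawn in.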

Assuming your polygons are always opaque and your sprites are all potentially partially transparent, then, as I think you're now doing, draw the polygons first, and then draw the sprites in back-to-front order.