The program renders a number of things into an intermediate framebuffer that uses an unsigned normalized texture to store the data. The intermediate is then blended into the default framebuffer (framebuffer 0). The fragment shader used to draw the intermediate during that blend pass is the following:

#version 300 es

precision mediump float;

out vec4 fragColor;

in vec2 texCoords;

uniform sampler2D textureToDraw;

void main()
{
    vec4 sampleColor = texture(textureToDraw, texCoords);

    // Negate RGB so that blending with GL_FUNC_ADD subtracts this
    // color from the destination; alpha is passed through unchanged.
    fragColor = vec4(sampleColor.rgb * -1.0, sampleColor.a);
}

The code for setting up the draw to framebuffer 0 is as follows:

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND); // blending must be enabled for the state below to apply
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

// Draw the intermediate using the fragment shader above and a quad

The rgb components of fragColor will be in the range [-1, 0]. The expected result is that the intermediate color is subtracted from the previous contents of the framebuffer. The actual result is that black is blended, with the correct alpha, into framebuffer 0, which indicates that fragColor is clamped to [0, 1] somewhere and the negative portion is discarded.

Is there a way to disable clamping of fragment shader outputs to [0, 1]?

I know that there is no way to render to a signed normalized texture, so maybe there is an OpenGL limitation that prevents this. The alternative I am considering is doing two render passes: one renders the negative parts into one intermediate, the other renders the positive parts into another. At the end, blend the positive intermediate with GL_FUNC_ADD and the negative one with GL_FUNC_REVERSE_SUBTRACT, as sketched below. This is slow and cumbersome to maintain. Is there any other way?
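
In code, the fallback would look roughly like this (posTex, negTex and drawFullscreenQuad() are placeholder names, not part of my actual code):

// Two-pass fallback sketch. Both intermediates store magnitudes in
// unsigned normalized textures; the sign is applied via the blend equation.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);

// Pass 1: add the positive intermediate (dst + src).
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, posTex);
drawFullscreenQuad();

// Pass 2: subtract the negative intermediate (dst - src).
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, negTex);
drawFullscreenQuad();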


1 Answer


Is there a way to disable clamping of fragment shader outputs to [0, 1]?

Let me quote from section 17.3.6 Blending of the OpenGL 4.6 Core Profile Specification (emphasis mine):

If the color buffer is fixed-point, the components of the source and destination values and blend factors are each clamped to [0, 1] or [−1, 1] respectively for an unsigned normalized or signed normalized color buffer prior to evaluating the blend equation. *If the color buffer is floating-point, no clamping occurs.* The resulting four values are sent to the next operation.

So you can use one of the *16F or *32F formats (e.g. GL_RGBA16F) to get rid of the clamping.
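
For example, a minimal sketch of creating such an intermediate (fboTex, fbo, width and height are placeholder names; note that on plain OpenGL ES 3.0, rendering to GL_RGBA16F additionally requires the EXT_color_buffer_float extension):

// Create a half-float color attachment so blend results are not clamped.
GLuint fboTex, fbo;
glGenTextures(1, &fboTex);
glBindTexture(GL_TEXTURE_2D, fboTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
             GL_RGBA, GL_HALF_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, fboTex, 0);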

I know that there is not a way to render to a signed normalized texture, so maybe there is an OpenGL limitation that prevents this.

The spec marks the various _SNORM formats as color-renderable. However, it does not include them among the formats that are required to be supported as a render target. Implementations may therefore allow you to use such a format, but they don't have to, so yes, you can't rely on that in any way.
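
If you still want to try an _SNORM attachment opportunistically, check framebuffer completeness and fall back when the implementation rejects it. A rough sketch (snormTex, fbo, width and height are placeholder names):

// Attempt GL_RGBA8_SNORM as a color attachment; it may be rejected.
glBindTexture(GL_TEXTURE_2D, snormTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8_SNORM, width, height, 0,
             GL_RGBA, GL_BYTE, NULL);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, snormTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // Typically GL_FRAMEBUFFER_UNSUPPORTED: this implementation does not
    // render to RGBA8_SNORM, so fall back to a floating-point format.
}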