The program renders a number of objects into an intermediate framebuffer whose color attachment is an unsigned normalized texture. The intermediate framebuffer is then blended into the default framebuffer. The fragment shader used to draw the intermediate for the blend with framebuffer 0 is the following:
#version 300 es
precision mediump float;
out vec4 fragColor;
in vec2 texCoords;
uniform sampler2D textureToDraw;
void main()
{
    vec4 sampleColor = texture(textureToDraw, texCoords);
    fragColor = vec4(sampleColor.rgb * -1.0, sampleColor.a);
}
The code for setting up the draw to framebuffer 0 is as follows:
glEnable(GL_BLEND); // blending is enabled so the equations below apply
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// Draw the intermediate using the fragment shader above and a full-screen quad
The rgb components of fragColor will be in the range [-1, 0]. The expected result is that the intermediate color is subtracted from the previous framebuffer contents. The actual result is that the color black is blended, with the correct alpha, into framebuffer 0, which indicates that fragColor is clamped to [0, 1] somewhere and the negative portion is discarded.
Is there a way to disable clamping of fragment shader outputs to [0, 1]?
I know that there is no way to render to a signed normalized texture, so maybe an OpenGL limitation prevents this. The alternative I am considering is two render passes: one renders the negative components into one intermediate and the other renders the positive components into another. At the end, blend the positive intermediate with GL_FUNC_ADD and the negative one with GL_FUNC_REVERSE_SUBTRACT. This is slow and cumbersome to maintain. Is there any other way?