I have two textures with different coordinates and sizes in my fragment shader:
varying highp vec2 v_currentTextureCoords;
varying highp vec2 v_backgroundTextureCoords;
uniform sampler2D u_currentTexture;
uniform sampler2D u_backgroundTexture;
void main()
{
    vec4 currentColor = texture2D(u_currentTexture, v_currentTextureCoords);
    vec4 backgroundColor = texture2D(u_backgroundTexture, v_backgroundTextureCoords);
    gl_FragColor = backgroundColor;
}
Here is the corresponding vertex shader:
attribute vec4 a_position;
attribute vec2 a_currentTextureCoords;
attribute vec2 a_backgroundTextureCoords;
varying vec2 v_currentTextureCoords;
varying vec2 v_backgroundTextureCoords;
void main()
{
    gl_Position = a_position;
    v_currentTextureCoords = a_currentTextureCoords;
    v_backgroundTextureCoords = a_backgroundTextureCoords;
}
Those shaders are responsible for rendering u_currentTexture.
EDIT: updated the post with my real application (an Android app) and my real problem.
As you can read above, the two textures are:

- u_backgroundTexture: a video stream, rendered full screen, size 1080x1920.
- u_currentTexture: any image, size 469x833 (smaller, but same aspect ratio).

To keep it simple, for now I don't want to blend anything, but just display the pixels of u_backgroundTexture in the shader program of u_currentTexture.
As you can see, the image rendered with the shaders above (in the top left corner, not the whole screen) is the same as the background image, but scaled down to fit a smaller rectangle. That's not what I want.
I want to display the pixels that are "behind" u_currentTexture (those of u_backgroundTexture), so that in the end one wouldn't even notice there are two textures.
But since the textures have different sizes and coordinates, it doesn't give that result at all (for now, it gives what you see above).
Then I managed to "scale" the texture in my fragment shader, so that the image in the top left corner has the same "zoom" level as the background image. To do this, I modified the fragment shader:
varying highp vec2 v_currentTextureCoords;
varying highp vec2 v_backgroundTextureCoords;
uniform sampler2D u_currentTexture;
uniform sampler2D u_backgroundTexture;
uniform vec2 u_scaleRatio; // Notice here
void main()
{
    vec4 currentColor = texture2D(u_currentTexture, v_currentTextureCoords);
    vec4 backgroundColor = texture2D(u_backgroundTexture, v_backgroundTextureCoords * u_scaleRatio); // And here
    gl_FragColor = backgroundColor;
}
I set u_scaleRatio in my program with glUniform2fv(). The values are basically (pseudo-code):
u_scaleRatio = vec2(currentTextureWidth / backgroundTextureWidth, currentTextureHeight / backgroundTextureHeight);
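Concretely, the host-side computation looks like this (a minimal sketch; the class and method names are mine, and the GLES20 upload call in the comment assumes a u_scaleRatio location obtained beforehand with glGetUniformLocation()):

```java
public class ScaleRatio {
    // Per-axis ratio between the small texture and the full-screen
    // background, as in the pseudo-code above.
    static float[] scaleRatio(float currentW, float currentH,
                              float backgroundW, float backgroundH) {
        return new float[] { currentW / backgroundW, currentH / backgroundH };
    }

    public static void main(String[] args) {
        // Sizes from the question: a 469x833 image over a 1080x1920 video.
        float[] ratio = scaleRatio(469f, 833f, 1080f, 1920f);
        // Then upload it:
        // GLES20.glUniform2fv(u_scaleRatioLocation, 1, ratio, 0);
        System.out.println(ratio[0] + ", " + ratio[1]);
    }
}
```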
As you can see, it's almost working, but there seems to be an offset on the X axis: the rendered image in the top left corner actually shows what we see in the top right corner... I can't find a way to correct it.
How can I modify my shaders so I can correct this offset, knowing that the textures have different sizes and coordinates?
EDIT 2: to answer a comment below, this is a screenshot with:
vec4 backgroundColor = texture2D(u_backgroundTexture, v_currentTextureCoords);
and with:
vec4 backgroundColor = texture2D(u_backgroundTexture, v_currentTextureCoords * u_scaleRatio);
EDIT 3: to reply to this answer, here is a screenshot with:
uniform vec2 texelSize; // (1.0/windowResolution.x, 1.0/windowResolution.y)
vec2 uv = gl_FragCoord.xy * texelSize;
vec4 backgroundColor = texture2D(u_backgroundTexture, uv);
I understand what you're trying to do; I think it would have worked if the vertex positions and texture coordinates were the same for both textures, but... they're not. Do you have any idea how to fix it?
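For completeness, the texelSize uniform from that answer would be computed host-side from the render-surface resolution (a sketch; the names are mine, and I'm assuming the 1080x1920 full-screen size given above):

```java
public class TexelSize {
    // Inverse of the window resolution: one texel step in normalized
    // [0, 1] coordinates, as the comment in the shader snippet says.
    static float[] texelSize(float width, float height) {
        return new float[] { 1.0f / width, 1.0f / height };
    }

    public static void main(String[] args) {
        float[] t = texelSize(1080f, 1920f);
        // Then upload it:
        // GLES20.glUniform2fv(texelSizeLocation, 1, t, 0);
        System.out.println(t[0] + ", " + t[1]);
    }
}
```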
Comments:
- vec4(1, 0, 0, 1) * vec4(0, 0, 1, 1) = vec4(0, 0, 0, 1). You should use another blending method, like the mix function. – MarGenDo
- ... a_currentTextureCoords? That is probably where the distortion comes from. – MarGenDo