When blending two textures of different sizes in the fragment shader, is it possible to map the textures to different coordinates?

For example, if blending the textures from the following two images:

[black-and-white mask image] [Matterhorn image]

with the following shaders:

      // Vertex shader
      uniform mat4 uMVPMatrix;
      attribute vec4 vPosition;
      attribute vec2 aTexcoord;
      varying vec2 vTexcoord;
      void main() {
        gl_Position = uMVPMatrix * vPosition;
        vTexcoord = aTexcoord;
      }

      // Fragment shader
      uniform sampler2D uContTexSampler;
      uniform sampler2D uMaskTextSampler;
      varying vec2 vTexcoord;
      void main() {
        vec4 mask = texture2D(uMaskTextSampler, vTexcoord);
        vec4 text = texture2D(uContTexSampler, vTexcoord);
        gl_FragColor = vec4(text.r * mask.r, text.g * mask.r, text.b * mask.r, text.a * mask.r);
      }

(The fragment shader shows the second texture where the black-and-white mask is white and hides it where the mask is black.)

Since both textures use the same gl_Position and texture coordinates (1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f), both textures get mapped to the same coordinates in the view:

[screenshot: both textures stretched over the same quad]

However, my goal is to maintain the original texture ratio:

[screenshot: desired result with the original texture ratio preserved]

I want to achieve this within the shader rather than with glBlendFunc or glBlendFuncSeparate, so that I can use my own values for blending.

Is there a way to achieve this within GLSL? I have a feeling that my approach of blending textures mapped to the same position coordinates is broken by design...

1 Answer

It is indeed possible, but you need a 2D scale vector for the mask texture coordinates. I suggest computing it on the CPU and sending it to the shader via a uniform. The alternative is to compute it per vertex or even per fragment, but you would still need to pass the image dimensions into the shader, so just do it on the CPU and add a uniform.

To get that scale vector you need a bit of simple math. You want to respect the mask's aspect ratio but scale it so that one of the two scale coordinates is 1.0 and the other is <= 1.0. That means you will see either the whole mask width or the whole mask height, while the other dimension is scaled down. For instance, if you have an image of size 1.0x1.0 and a mask of size 2.0x1.0, your scale vector will be (.5, 1.0).

To use this scaleVector you simply need to multiply the texture coordinates:

vec4 mask = texture2D(uMaskTextSampler, vTexcoord * scaleVector);

To compute the scale vector try this:

    // dimensions of the content texture and the mask, known on the CPU
    float imageWidth;
    float imageHeight;
    float maskWidth;
    float maskHeight;

    float imageRatio = imageWidth / imageHeight;
    float maskRatio = maskWidth / maskHeight;

    float scaleX, scaleY;

    if (imageRatio / maskRatio > 1.0f) {
        // x will be 1.0, shrink y
        scaleX = 1.0f;
        scaleY = 1.0f / (imageRatio / maskRatio);
    }
    else {
        // y will be 1.0, shrink x
        scaleX = imageRatio / maskRatio;
        scaleY = 1.0f;
    }

Note that I did not try this code, so you might need to play around with it a bit.

EDIT: fixing the texture-coordinate scaling

The scale above makes the mask texture sample its top-left part instead of its centre. The coordinates must be scaled around the centre: take the vector from the centre, vTexcoord - vec2(.5, .5), scale that vector, then add the centre back:

vec2 fromCentre = vTexcoord - vec2(.5, .5);
vec2 scaledFromCentre = fromCentre * scaleVector;
vec2 resultCoordinate = vec2(.5, .5) + scaledFromCentre;

You can fold this into a single line and even shorten it a bit (work it out on paper first).