5 votes

So I've got a triangle:

And I've got a vertex shader:

uniform mat4 uViewProjection;
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoords;
varying vec2 vTextureCoords;

void main(void) {
  vTextureCoords = aTextureCoords;
  gl_Position = uViewProjection * vec4(aVertexPosition, 1.0);
}

And I've got a fragment shader:

precision mediump float;
uniform sampler2D uMyTexture;
varying vec2 vTextureCoords;

void main(void) {
  gl_FragColor = texture2D(uMyTexture, vTextureCoords);
}

And I feed in three sets of vertices and UVs, interleaved:

#   x,    y,    z,    s,   t
  0.0,  1.0,  0.0,  0.5, 1.0
 -1.0, -1.0,  0.0,  0.0, 0.0
  1.0, -1.0,  0.0,  1.0, 0.0
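Assuming a WebGL-style setup, the upload and attribute pointers would look roughly like this (a minimal sketch; gl stands for the rendering context, program for the linked shader program, and the variable names are only illustrative):

// One Float32Array holding x,y,z,s,t per vertex (5 floats = 20 bytes per vertex).
var vertexData = new Float32Array([
   0.0,  1.0, 0.0,  0.5, 1.0,
  -1.0, -1.0, 0.0,  0.0, 0.0,
   1.0, -1.0, 0.0,  1.0, 0.0
]);

var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);

var stride = 5 * Float32Array.BYTES_PER_ELEMENT; // 20 bytes from one vertex to the next

// aVertexPosition reads 3 floats starting at byte offset 0 of each vertex.
var posLoc = gl.getAttribLocation(program, "aVertexPosition");
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, stride, 0);

// aTextureCoords reads 2 floats starting right after the position (byte offset 12).
var uvLoc = gl.getAttribLocation(program, "aTextureCoords");
gl.enableVertexAttribArray(uvLoc);
gl.vertexAttribPointer(uvLoc, 2, gl.FLOAT, false, stride, 3 * Float32Array.BYTES_PER_ELEMENT);

// Vertex shader runs 3 times; fragment shader runs once per covered pixel.
gl.drawArrays(gl.TRIANGLES, 0, 3);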

How does the fragment shader know to draw one pixel inside the triangle (call it A) differently from another (call it B)? What changes between the two invocations?

You fed in a texture, right? Why would you expect them to be the same, or are you asking how shaders work on the GPU? – Jesus Ramos
@JesusRamos I'm asking how shaders work. I don't understand what's changing, and who's changing it, between shader passes. – a paid nerd
Check out @genpfault's answer. I was going to say the same thing but he beat me to it :) – Jesus Ramos
Check out this great answer, stackoverflow.com/q/14246604/187752, on when OpenGL might be sampling values you are not expecting. – Kimi

1 Answer

7 votes

As I understand it, the rasterization stage of the GL pipeline interpolates vTextureCoords across the triangle's face and runs the fragment shader once for each covered pixel, handing it the interpolated value. So pixels A and B each receive a different vTextureCoords, and the texture lookup therefore returns a different color for each.
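To make that concrete, here is a small standalone sketch (plain JavaScript, not part of any GL API) of what the rasterizer conceptually does with the three per-vertex UVs from the question: each covered pixel gets a set of barycentric weights for the three vertices, and the varying it receives is the weighted average of the per-vertex values. The weights below are made up for illustration; real hardware also applies perspective correction, which is a no-op here since all three z values are equal.

// Per-vertex UVs from the question, in the same order as the vertex data.
var uv0 = [0.5, 1.0];
var uv1 = [0.0, 0.0];
var uv2 = [1.0, 0.0];

// Interpolate the 2-component varying for one pixel, given its
// barycentric weights (w0 + w1 + w2 = 1).
function interpolateUV(w0, w1, w2) {
  return [
    w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0],
    w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1]
  ];
}

// A pixel near the top vertex is weighted mostly toward uv0...
console.log(interpolateUV(0.8, 0.1, 0.1)); // [0.5, 0.8]
// ...while a pixel near the bottom-right vertex is weighted toward uv2.
console.log(interpolateUV(0.1, 0.1, 0.8)); // [0.85, 0.1]

So nothing inside the fragment shader itself "changes": the same program runs for every pixel, but its vTextureCoords input is different on each invocation, which makes the texture lookup return different colors for pixel A and pixel B.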