
I'm trying to perform the same transform in JavaScript as in a vertex shader. My vertex shader transforms vertices like a typical WebGL example:

gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);

My canvas is 600x600 px (but the canvas size turns out to be irrelevant in this case).

The resulting gl_Position.xy is not what I expected. Rather than being in the range 0,600 or -300,300 or -1,1, it seems to be roughly -6,6. (I ended up writing tests in the shader, e.g. if the transformed gl_Position.x > 5.0, color it red.)

That -6,6 (or 12x12) drawing area remains constant regardless of canvas size.

After adding a scale factor and tweaking it by eye, I've managed to sync my JavaScript transform with the shader transform. But how do I get the size of the drawing area of the transformed vertices from WebGL? How is that -6,6 range determined?
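
For reference, my JavaScript-side transform looks roughly like this (a sketch assuming the gl-matrix library; uPMatrix and uMVMatrix are the same mat4 values uploaded to the shader as uniforms):

import { mat4, vec4 } from 'gl-matrix';

// JavaScript equivalent of: uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0)
function transformVertex(aVertexPosition, uPMatrix, uMVMatrix) {
  const pos = vec4.fromValues(
    aVertexPosition[0], aVertexPosition[1], aVertexPosition[2], 1.0);
  const mvp = mat4.multiply(mat4.create(), uPMatrix, uMVMatrix);
  // The result is in clip coordinates, not pixels, which is why a
  // hand-tuned scale factor seemed necessary.
  return vec4.transformMat4(vec4.create(), pos, mvp);
}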


1 Answer


If you are truly trying to match the vertex transform, you have neglected a few operations that occur AFTER the programmable vertex stage.

gl_Position is in clip space; its xyz components still have a few transformations to go before you get the NDC or screen-space coordinates you discussed in your question.

  1. There is a perspective divide that occurs after the vertex shader and before rasterization (pos.xyz /= pos.w); see the JavaScript sketch after this list.

    • Given an orthographic projection matrix this is a no-op, since pos.w will always be 1.0. With a perspective projection it is not, and this divide is likely a big part of your unexpected scale.

  2. There is also a viewport transform that will take the coordinates out of NDC space and into screen space.

    • The viewport transform is a little different from all the rest, since the viewport is not defined using a matrix; it basically just defines a scale and bias operation and has no effect on clipping.
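
Here is a minimal JavaScript sketch of those two fixed-function steps (the viewport and depth-range values are assumptions; substitute whatever your own gl.viewport() and gl.depthRange() calls set, e.g. 0, 0, 600, 600 for your canvas):

// Converts the clip-space output of the vertex shader into window coordinates.
// viewport = [x, y, width, height], depthRange = [near, far]; both defaults
// are assumed values, taken from gl.viewport() / gl.depthRange().
function clipToWindow(clipPos, viewport = [0, 0, 600, 600], depthRange = [0, 1]) {
  const [vx, vy, vw, vh] = viewport;
  const [near, far] = depthRange;

  // 1. Perspective divide: clip coordinates -> normalized device coordinates.
  const ndcX = clipPos[0] / clipPos[3];
  const ndcY = clipPos[1] / clipPos[3];
  const ndcZ = clipPos[2] / clipPos[3];

  // 2. Viewport transform: NDC [-1, 1] -> window coordinates (scale and bias).
  return {
    x: vx + (ndcX + 1) * vw / 2,
    y: vy + (ndcY + 1) * vh / 2,
    z: near + (ndcZ + 1) * (far - near) / 2,
  };
}

This is also where your -6,6 measurement comes from: clip coordinates are only constrained to -w <= x,y,z <= w, so with a perspective projection the visible range of gl_Position.xy is roughly ±w, and your geometry evidently sits at a depth where w is about 6.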


Take a few moments to review this illustration of the OpenGL vertex transformation pipeline (source: songho.ca):

Object Coordinates -> [Model-View Matrix] -> Eye Coordinates -> [Projection Matrix] -> Clip Coordinates -> [Divide by w] -> Normalized Device Coordinates -> [Viewport Transform] -> Window Coordinates

The output from your vertex shader is labeled Clip Coordinates (clip space) in this diagram; primitive assembly and rasterization are where the transformation into Window Coordinates (screen space) is finally completed. Moreover, the two operations discussed in the numbered points above are not part of the programmable pipeline.

In case you were wondering, the diagram is from Song Ho Ahn (안성호)'s website. It is part of an article titled "OpenGL Transformation", which may help to reinforce what I discussed in this answer.
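
Putting the two sketches above together (transformVertex and clipToWindow are the hypothetical helpers from the earlier snippets):

// End-to-end check: the window coordinates should match where WebGL
// actually rasterizes the vertex on a 600x600 canvas.
const clip = transformVertex(aVertexPosition, uPMatrix, uMVMatrix);
const win = clipToWindow(clip, [0, 0, 600, 600]);

Note that window coordinates have their origin at the bottom-left in WebGL, so if you compare against 2D canvas coordinates you will also need to flip the y axis.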