1 vote

I've been trying to work with more complicated shaders, and have run into issues with the coordinate systems used by the vertex shader and texture sampler. In short: they don't seem to make any sense, and when trying to test them I end up getting inconsistent results. To make matters worse, the internet has little in the way of documentation, and most of the information I've found seems to expect me to know how this works already. I was hoping someone could clarify the following:

  • The vertex shader outputs an (x, y, z) representing a location on the render target. What are acceptable values for x, y, and z?
  • How do x and y correspond to the width and height of the back buffer (assuming that it's the render target)?
  • How do x and y correspond to the width and height on an output texture (assuming that it's the render target)?
  • When x=0 and y=0 where does the vertex sit, location-wise?
  • The texture samplers sample a texture at a (u, v) coordinate. What are acceptable values for u and v?
  • How do u and v correspond with the width and height of the texture being sampled?
  • How do AGAL's wrap, clamp, and repeat flags alter sampling, and what is the default behavior when one isn't given?
  • When sampling at u=0 and v=0, which pixel is returned, location-wise?

EDIT: From my tests, I believe the answers are:

  • Unsure
  • -1 is left/bottom, 1 is right/top
  • Unsure
  • At the center of the output
  • Unsure
  • 0 is left/bottom, 1 is right/top
  • Unsure
  • The far bottom-left of the texture
Are you learning this for your own understanding, or so that you can produce something using Stage3D? – Marty
Both! I've worked with pixel shaders before, but they were higher-level than, and thus syntactically different from, the ones used in Flash. If these questions are answered, I should be able to proceed without any trouble. Before this question gets asked: I am not willing to work with an existing engine (Starling, Away3D, etc.); my application works in such a way that these programs won't help very much, if at all. – Conduit
Sure, that's good to know. I don't have the brain for this level of programming, so I was indeed going to recommend something existing :-) – Marty
I surely appreciate the attempt, in any case! – Conduit

1 Answer

1 vote
  • You normally use a coordinate system of your own and then multiply each vertex position by the MVP (model-view-projection) matrix to get normalized device coordinates (NDC), which are what the vertex shader outputs to the GPU. There is a nice article explaining all of that for Stage3D. (A minimal vertex-shader sketch follows this list.)
  • Correct. And z is in the range [0, 1].
  • Rendering to a render target is the same as rendering to the back buffer: you output NDC from your vertex shader, so the actual pixel size of the texture is irrelevant.
  • Yup, center of the screen.
  • Normally it's [0, 1], but you can use values outside that range; the output then depends on the texture wrap mode (such as repeat or clamp) set on the sampler.
  • (0, 0) is left/top, (1, 1) is right/bottom.
  • The default is repeat. These modes decide what you get when you sample with a coordinate outside the [0, 1] range. With repeat, (1.5, 1.5) gives the same result as (0.5, 0.5), while (1.0, 1.0) is used if the mode is set to clamp. (See the sampler sketch after this list.)
  • Top-left pixel of the texture.
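
To make the first two points concrete, here is a minimal vertex-shader sketch using AGALMiniAssembler. The register assignments (position in va0, UV in va1, MVP matrix in vc0..vc3) and the variable names are my own choices for the example, not anything Stage3D mandates:

    // Minimal sketch, assuming "context3D" is an initialized Context3D and the
    // MVP matrix has already been uploaded to vertex constants vc0..vc3, e.g.:
    //   context3D.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, mvp, true);
    import com.adobe.utils.AGALMiniAssembler;
    import flash.display3D.Context3DProgramType;

    var vertexAssembler:AGALMiniAssembler = new AGALMiniAssembler();
    vertexAssembler.assemble(Context3DProgramType.VERTEX,
        // va0 holds the position in whatever space your model data uses;
        // multiplying by the MVP matrix in vc0..vc3 yields the clip-space
        // position written to op. After the perspective divide, x and y end
        // up in [-1, 1] (left/bottom to right/top) and z in [0, 1].
        "m44 op, va0, vc0\n" +
        // Pass the per-vertex UV in va1 through to the fragment shader.
        "mov v0, va1"
    );

The same program works whether you render to the back buffer or to a texture selected with Context3D.setRenderToTexture(): in both cases the vertex shader outputs NDC, and the rasterizer maps [-1, 1] onto the full target regardless of its pixel dimensions.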
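And a matching fragment-shader sketch for the sampling questions, continuing from the code above. The flags in angle brackets are where the wrap behaviour is chosen; swap repeat for clamp to pin out-of-range coordinates to the edge texels. Again, the register choices are just for the example, and I'm assuming the AGALMiniAssembler flag spellings shown here:

    var fragmentAssembler:AGALMiniAssembler = new AGALMiniAssembler();
    fragmentAssembler.assemble(Context3DProgramType.FRAGMENT,
        // Sample the texture bound to sampler fs0 at the interpolated UV in v0.
        // (0, 0) hits the top-left texel and (1, 1) the bottom-right; with
        // "repeat", a UV of (1.5, 1.5) samples the same texel as (0.5, 0.5),
        // whereas "clamp" would hold it at (1.0, 1.0).
        "tex ft0, v0, fs0 <2d, linear, repeat, mipnone>\n" +
        // Write the sampled colour to the output.
        "mov oc, ft0"
    );

You would then upload both with a program created by context3D.createProgram(), calling program.upload(vertexAssembler.agalcode, fragmentAssembler.agalcode), and bind the texture to sampler 0 with context3D.setTextureAt(0, texture).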