1 vote

I'm working with Direct3D 11 and HLSL. I use four different shaders (vertex, hull, domain and pixel).

I always have trouble choosing the right coordinate space in my shaders. Could somebody identify the appropriate space for the vertex, hull, domain, and pixel shader stages?

Are you talking about general coordinate spaces within the shaders, or whether the shaders expect things to be in right/left-handed coordinate systems? - Alex
The coordinate spaces within the shaders; the right/left-handed coordinate systems I already understand. - Jinxi

3 Answers

1 vote

Typically, vertices are streamed into the vertex shader in patch space, a sort of local coordinate system that describes the areas to be tessellated (patches) but may not describe the final model itself (polygons). The hull and domain shaders work on these patches. The hull shader runs in patch space, while the domain shader works half with patch-space positions passed from the hull shader's control point function (a control point is a "patch-level vertex") and half with parametric coordinates locating each new vertex within the patch, which tell you where the new vertices lie in relation to the patch's control points.

Once tessellation is complete, the domain shader's output should be in projection space if you have no geometry shader, or can remain in model space if you do. The geometry shader can then add or remove vertices, transform the results into projection space, and pass them on to the pixel shader. The final stage, the pixel shader, usually operates in screen space (i.e. if you query SV_Position, it will be in screen space).
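
As a rough sketch of that flow for a triangle patch, a domain shader might look like the following. The struct layouts, the PerObject buffer, and the WorldViewProj matrix are assumptions of mine, not something the pipeline prescribes:

    // Control points arrive in patch/model space; the output leaves in projection
    // (clip) space because no geometry shader follows in this sketch.

    struct ControlPoint                    // emitted by the hull shader's control point function
    {
        float3 posModel : POSITION;        // still in the patch's local/model space
    };

    struct PatchConstants                  // emitted by the hull shader's patch constant function
    {
        float edges[3] : SV_TessFactor;
        float inside   : SV_InsideTessFactor;
    };

    cbuffer PerObject
    {
        float4x4 WorldViewProj;            // assumed: model -> clip space in one matrix
    };

    [domain("tri")]
    float4 DomainMain(PatchConstants pc,
                      float3 bary : SV_DomainLocation,              // parametric location inside the patch
                      const OutputPatch<ControlPoint, 3> patch) : SV_Position
    {
        // Interpolate the new vertex in patch space from the control points...
        float3 posModel = patch[0].posModel * bary.x +
                          patch[1].posModel * bary.y +
                          patch[2].posModel * bary.z;

        // ...then hand it to the rasterizer in projection space.
        return mul(float4(posModel, 1.0f), WorldViewProj);
    }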

While these are common defaults, you can also specify coordinates directly in projection space (if that is more convenient for an algorithm), world space, local space, transformed local space, tangent space, normal space, light space... really anything you'd like. What I've listed is the typical pipeline arrangement, but it is only one example. It's really up to you.

3 votes

There are no restrictions on what kind of spaces you use in any of the shaders - you are free to use whatever fits your purpose. In fact, it's pretty common to use multiple spaces inside the same stage - for example, using the world-space coordinates of a light source to calculate lighting in the pixel shader. The only requirement is that the SV_Position sent to the rasterizer must be in clip space, so whatever your last stage before rasterization is needs to output it that way.
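
For instance, a pixel shader along these lines mixes spaces freely: SV_Position arrives in screen space, while the lighting itself is computed with world-space data. The buffer layout and member names below are my own assumptions, just to illustrate the point:

    cbuffer Lighting
    {
        float3 LightPosWS;               // world-space light position (assumed layout)
        float3 LightColor;
    };

    struct PSInput
    {
        float4 posClip  : SV_Position;   // written in clip space by the last pre-raster stage,
                                         // read here as screen-space pixel coordinates
        float3 posWorld : TEXCOORD0;     // world space, interpolated by the rasterizer
        float3 normalWS : NORMAL;        // world space
    };

    float4 PixelMain(PSInput input) : SV_Target
    {
        // The lighting math happens entirely in world space; SV_Position is not involved.
        float3 toLight = normalize(LightPosWS - input.posWorld);
        float  diffuse = saturate(dot(normalize(input.normalWS), toLight));
        return float4(diffuse * LightColor, 1.0f);
    }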

0 votes

Here is the answer I got on GameDev from Josh Petrie:

  • The vertex shader takes input data from vertex buffers (which are typically in model space), transforms that input, and produces output data in clip space; a minimal sketch of this follows the list. After the vertex shader stage, the perspective divide (by the output w coordinate) occurs.

  • The pixel shader deals with fragments, not vertices, but when you do make use of coordinates (via SV_Position for example), those coordinates are in screen space, offset by 0.5 so they land on pixel centers. This means they are in pixel units ranging over the viewport dimensions, not normalized zero-to-one values.

  • The hull and domain shaders operate on input data and produce output data within the same space. Because the input data is generally in model space, the output data is also in model space.
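
A minimal sketch of the first bullet's canonical default (the constant buffer and matrix name are assumptions, not part of the quoted answer):

    cbuffer PerObject
    {
        float4x4 WorldViewProj;           // assumed: model -> clip space in one matrix
    };

    // Vertex buffer data in model space goes in; a clip-space position comes out.
    float4 VertexMain(float3 posModel : POSITION) : SV_Position
    {
        // The pipeline performs the perspective divide by w later, after clipping.
        return mul(float4(posModel, 1.0f), WorldViewProj);
    }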

That said, these are just canonical defaults. Since you are controlling the input data and the interpretation of that input data, you can place that data in any coordinate frame that is useful to you. For example, when faking 2D graphics in a 3D API it can be useful to simply place pixel coordinate input data into vertex buffers and have a very simple, almost no-op vertex shader.

There are some aspects of the pipeline outside your programmable control, though. For example, SV_Position will always be in offset screen space. The biggest thing to worry about is actually the output of the vertex shader stage. The rest of the pipeline will assume that output is in clip space, perform clipping, and perform the perspective division by w. Consequently you may need to set w accordingly (often to 1.0) if you want to "avoid" this division.
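
As a sketch of that "almost no-op" 2D case, assuming a hypothetical constant buffer that holds the render-target size in pixels:

    cbuffer Viewport
    {
        float2 ViewportSize;   // assumed: render-target width and height in pixels
    };

    // The vertex buffer already holds pixel coordinates; no model/view/projection matrices.
    float4 Vertex2D(float2 posPixels : POSITION) : SV_Position
    {
        // Remap pixels to clip space; with w = 1.0 the later divide by w changes nothing.
        float2 ndc = posPixels / ViewportSize * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f);
        return float4(ndc, 0.0f, 1.0f);
    }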

answers from gamedev