9 votes

In both the OpenGL and Direct3D rendering pipelines, the geometry shader is processed after the vertex shader and before the fragment/pixel shader. Now obviously processing the geometry shader after the fragment/pixel shader makes no sense, but what I'm wondering is why not put it before the vertex shader?

From a software/high-level perspective, at least, it seems to make more sense that way: first you run the geometry shader to create all the vertices you want (and dump any data only relevant to the geometry shader), then you run the vertex shader on all the vertices thus created. There's an obvious drawback in that the vertex shader now has to be run on each of the newly-created vertices, but any logic that needs to be done there would, in the current pipelines, need to be run for each vertex in the geometry shader, presumably; so there's not much of a performance hit there.

I'm assuming, since the geometry shader is in this position in both pipelines, that there's either a hardware reason, or a non-obvious pipeline reason that it makes more sense.

(I am aware that polygon linking needs to take place before running a geometry shader (possibly not if it takes single points as inputs?) but I also know it needs to run after the geometry shader as well, so wouldn't it still make sense to run the vertex shader between those stages?)

1 Answer

6 votes

It is basically because "geometry shader" was a pretty stupid choice of words on Microsoft's part. It should have been called "primitive shader."

Geometry shaders make the primitive assembly stage programmable, and you cannot assemble primitives before you have an input stream of vertices computed. There is some overlap in functionality since you can take one input primitive type and spit out a completely different type (often requiring the calculation of extra vertices).
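As a minimal sketch of that point (not from the original answer), here is a GLSL geometry shader whose layout qualifiers declare the primitive-type conversion: it consumes assembled triangles and emits a different primitive type, a line strip tracing the triangle's outline.

    #version 330 core

    layout(triangles) in;                       // assembled triangles come in
    layout(line_strip, max_vertices = 4) out;   // a different primitive type goes out

    void main()
    {
        // Walk the triangle's three corners, then close back to the first,
        // turning one input primitive into another kind entirely.
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        gl_Position = gl_in[0].gl_Position;
        EmitVertex();
        EndPrimitive();
    }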

These extra emitted vertices do not require a trip backwards in the pipeline to the vertex shader stage - they are completely calculated during an invocation of the geometry shader. This concept should not be too foreign, because tessellation control and evaluation shaders also look very much like vertex shaders in form and function.
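To illustrate (again a hedged sketch, with a made-up uHalfSize uniform rather than anything from the answer), a point-to-billboard geometry shader computes all of its extra vertices inside the one invocation; nothing loops back to the vertex shader stage.

    #version 330 core

    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    // Hypothetical uniform: half the billboard size in clip space.
    uniform vec2 uHalfSize = vec2(0.05);

    void main()
    {
        // The four corners are calculated right here, during this single
        // geometry shader invocation, from the one incoming point.
        vec4 c = gl_in[0].gl_Position;
        gl_Position = c + vec4(-uHalfSize.x, -uHalfSize.y, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4( uHalfSize.x, -uHalfSize.y, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4(-uHalfSize.x,  uHalfSize.y, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4( uHalfSize.x,  uHalfSize.y, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }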

There are a lot of stages of vertex transform, and what we call vertex shaders are just the tip of the iceberg. In a modern application you can expect the output of a vertex shader to go through multiple additional stages before you have a finalized vertex for rasterization and pixel shading (which is also poorly named).