
If we pass a varying from any geometry stage (vertex, geometry, or tessellation shader) to the fragment shader, we always lose some information. Basically, we lose it in one of two ways:

  • By interpolation: whether smooth, noperspective, or centroid does not matter. If we pass three floats (one per vertex) from the geometry stage, we get only one blended float in the fragment stage.
  • By discarding: with flat interpolation, the hardware discards all values except the one from the provoking vertex.
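
In GLSL terms, the two cases correspond to the standard interpolation qualifiers. A minimal sketch (the varying names are made up for illustration):

// Vertex shader (GLSL 330) - two ways to pass the same per-vertex value:
smooth out float heightSmooth; // fragment shader sees a barycentric blend
                               // of the three vertices' values
flat   out float heightFlat;   // fragment shader sees only the value
                               // written by the provoking vertex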

Why does OpenGL not allow functionality like this:

Vertex shader:

// nointerp is an interpolation qualifier I would like to have
// along with smooth or flat.
nointerp out float val;

void main()
{
    val = whatever;
}

Fragment shader:

nointerp in float val[3];
// val[0] might contain the value from provoking vertex,
// and the rest of val[] elements contain values from vertices in winding order.

void main()
{
    // some code
}

In GLSL 330 I need to resort to integer-indexing tricks or divide by barycentric coordinates in the fragment shader if I want the values from all vertices.
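
One such workaround (a sketch of my own, assuming a GLSL 330 pipeline with a geometry shader) is a pass-through geometry shader that gathers all three per-vertex values into a single flat vec3, which the fragment shader can then index directly:

// Geometry shader (GLSL 330): gather per-vertex values for the fragment stage.
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in float val[];        // from the vertex shader, one per vertex
flat out vec3 triVal;  // all three per-vertex values, not interpolated

void main()
{
    vec3 v = vec3(val[0], val[1], val[2]);
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        triVal = v;   // identical on every vertex, so flat loses nothing
        EmitVertex();
    }
    EndPrimitive();
}

// Matching fragment shader input:
// flat in vec3 triVal;  // triVal[0..2] = values of vertices 0..2

The cost is an extra pipeline stage, which is exactly why a built-in qualifier would be nicer.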

Is it hard to implement in hardware, or is it not widely requested by shader coders? Or am I not aware of it?


1 Answer


Is it hard to implement in hardware, or is it not widely requested by shader coders?

It is usually just not needed by typical shading algorithms, so traditionally there has been (more or less) automatic interpolation for each fragment. It is probably not too hard to implement on current-generation hardware, because modern desktop GPUs typically use "pull-model interpolation" (see Fabian Giesen's blog article) anyway: the actual interpolation is already done in the shader, and the fixed-function hardware just provides the interpolation coefficients. But this is hidden from you by the driver.
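
You can see a hint of the pull model even in unextended GLSL: the interpolateAt* built-ins of GLSL 4.00 re-evaluate a varying at a caller-chosen location per fragment, which is only cheap if the interpolation happens in the shader itself. A sketch:

// Fragment shader (GLSL 400): re-interpolating the same varying twice.
#version 400 core
in vec2 uv;
out vec4 color;

void main()
{
    vec2 atCentroid = interpolateAtCentroid(uv);
    vec2 nudged     = interpolateAtOffset(uv, vec2(0.1, 0.1));
    // visualize the difference between the two sample positions
    color = vec4(nudged - atCentroid, 0.0, 1.0);
}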

Or am I not aware of it?

Well, in unextended GL, there is currently (GL 4.6) no such feature. However, there are two related GL extensions:

  • GL_AMD_shader_explicit_vertex_parameter
  • GL_NV_fragment_shader_barycentric

which basically provide the features you are asking for.
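
For illustration, a sketch of how GL_NV_fragment_shader_barycentric exposes roughly the hypothetical "nointerp" from the question: the pervertexNV qualifier gives un-interpolated access to each vertex's value, and gl_BaryCoordNV holds the fragment's barycentric weights.

// Fragment shader sketch using the NV extension.
#version 450 core
#extension GL_NV_fragment_shader_barycentric : require

pervertexNV in float val[3]; // one un-interpolated value per triangle vertex
out vec4 color;

void main()
{
    // e.g. reconstruct the usual smooth interpolation by hand:
    float manual = gl_BaryCoordNV.x * val[0]
                 + gl_BaryCoordNV.y * val[1]
                 + gl_BaryCoordNV.z * val[2];
    color = vec4(vec3(manual), 1.0);
}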