If we pass a varying from any geometry-processing stage (vertex, geometry, or tessellation shader) to the fragment shader, we always lose some information. Basically, we lose it in one of two ways:
- By interpolation: whether smooth, noperspective, or centroid does not matter. If we pass three floats (one per vertex) from the geometry stage, we get only a single blended float in the fragment stage.
- By discarding. With flat interpolation, the hardware discards all values except the one from the provoking vertex.
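For contrast, the two existing behaviors look like this on the fragment-shader side (a minimal sketch; the names v_smooth and v_flat are mine):

```glsl
// Two fragment-shader inputs illustrating the two kinds of loss:
smooth in float v_smooth; // barycentric blend of the three per-vertex values
flat   in float v_flat;   // value from the provoking vertex only;
                          // the other two vertices' values are discarded
```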
Why does OpenGL not allow functionality like this:
Vertex shader:
// nointerp is an interpolation qualifier I would like to have
// along with smooth or flat.
nointerp out float val;
void main()
{
val = whatever;
}
Fragment shader:
nointerp in float val[3];
// val[0] would contain the value from the provoking vertex,
// and the remaining elements the values from the other
// vertices in winding order.
void main()
{
// some code
}
In GLSL 330 I have to resort to integer-indexing tricks or to dividing by barycentric coordinates in the fragment shader if I want the values from all three vertices.
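To make the workaround concrete, here is a sketch of the geometry-shader variant (assuming a geometry stage is available; the names vals and bary are mine): gather the three per-vertex values, forward them all as a single flat vec3, and add a smoothly interpolated barycentric coordinate so the fragment shader can tell the vertices apart.

```glsl
// Geometry shader: forward all three per-vertex values, unblended.
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in float val[];       // one value per input vertex
flat out vec3 vals;   // all three values, not interpolated
smooth out vec3 bary; // barycentric weights, interpolated as usual

void main()
{
    vec3 allVals = vec3(val[0], val[1], val[2]);
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        vals = allVals;  // same flat payload emitted from every vertex
        bary = vec3(0.0);
        bary[i] = 1.0;   // (1,0,0), (0,1,0), (0,0,1)
        EmitVertex();
    }
    EndPrimitive();
}
```

In the fragment shader, vals.x, vals.y, and vals.z are then the unblended per-vertex values, and bary tells you each vertex's interpolation weight at the current fragment.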
Is this hard to implement in hardware, is it simply not widely requested by shader authors, or is there an existing feature I am not aware of?