
Environment:

  • Windows 10 version 1803
  • nVidia GeForce GTX 780 Ti
  • Latest driver 398.36 installed
  • Visual Studio 2015 Update 3
  • OpenGL 4.6

GLSL Source:

#version 460 core

in vec4 vPos;

void
main()
{
  float coeff[];
  int i,j;
  coeff[7] = 2.38;
  i=coeff.length();
  coeff[9] = 4.96;
  j=coeff.length();

  if(i<j)
    gl_Position = vPos;
}

My expectation is that i is 8 and j is 10, so gl_Position = vPos; should be executed. However, shader debugging with Nsight shows that both i and j are 10, so gl_Position = vPos; is skipped for every vertex. What is going on? Is this related to compiler optimization? If I want the GLSL to compile as I expect (so that i < j is true), how can I fix the code? Thanks.

1 Answer

This is both incorrect usage on your part and a compiler bug (the compiler accepts a shader that it should have rejected).

See what the specification has to say:

It is legal to declare an array without a size (unsized) and then later redeclare the same name as an array of the same type and specify a size, or index it only with integral constant expressions (implicitly sized).
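In other words, something like the following is fine on its own (a minimal sketch of how I read that rule; the implicit size ends up being the largest constant index used, plus one):

#version 460 core

void main()
{
  float coeff[];           // declared without a size
  coeff[9] = 4.96;         // constant index: coeff is implicitly sized to 10
  coeff[7] = 2.38;         // fine: 7 is below that size
  int n = coeff.length();  // compile-time constant 10
}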

OK so far, that's what you are doing. But now...

It is a compile-time error to declare an array with a size, and then later (in the same shader) index the same array with an integral constant expression greater than or equal to the declared size.

That's also what you are doing. Your first constant index, 7, implicitly sizes the array to 8; your second constant index, 9, is then greater than or equal to that size. That's not allowed, and it is an error that must be detected at compile time. So the fact that this "works" at all (i.e. compiles without an error) is a compiler bug.
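You can see the same rule with an explicit size, where a conforming compiler must reject the shader (a minimal sketch):

#version 460 core

void main()
{
  float coeff[8];    // explicitly declared with size 8
  coeff[9] = 4.96;   // compile-time error: constant index 9 >= declared size 8
}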

Now why do you see a size of 10 for both calls, then? Don't ask me for certain; my best guess is that the NVIDIA compiler, rather than rejecting the shader, simply grows the implicit size to the largest constant index used anywhere in the shader, plus one (9 + 1 = 10), and length() then reports that final size as a compile-time constant everywhere. Something to make it work anyway, even though it's wrong.
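As for actually getting i < j to be true: an array's size in GLSL is a compile-time constant, so a single array can never report two different lengths. One way to get the comparison you are after (a sketch, not the only option) is to use two arrays with explicit sizes:

#version 460 core

in vec4 vPos;

void main()
{
  float a[8];
  float b[10];
  int i = a.length();  // compile-time constant 8
  int j = b.length();  // compile-time constant 10

  if (i < j)           // 8 < 10, always true
    gl_Position = vPos;
}

Whether that is useful depends on what your code was really trying to achieve, but at least it is valid GLSL with a well-defined result.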