1 vote

I've just moved my rendering code onto my laptop and am having issues with OpenGL and GLSL.

I have a vertex shader like this (simplified):

uniform float tile_size;

void main(void) {
    // (position setup omitted from the original snippet; something like this is assumed)
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    gl_PointSize = tile_size;
    // gl_PointSize = 12;
}

and a fragment shader which uses gl_PointCoord to read a texture and set the fragment colour. In my C++ program I'm trying to bind tile_size as follows:

glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);

// 'program' is the linked shader program object (glUseProgram(program) has already been called)
GLint unif_tilesize = glGetUniformLocation(program, "tile_size");
glUniform1f(unif_tilesize, 12.0f);

(Just to clarify: I've already set up the program and called glUseProgram; the code above is just the snippet dealing with this particular uniform.)
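
For reference, the fragment shader is doing essentially this (again simplified; the sampler name is just a placeholder):

uniform sampler2D tile_texture; // placeholder name

void main(void) {
    // gl_PointCoord should run from (0,0) to (1,1) across each point sprite
    gl_FragColor = texture2D(tile_texture, gl_PointCoord);
}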

Now, set up like this, I get one-pixel points, and I've discovered that OpenGL is failing to find the uniform: unif_tilesize gets set to -1.
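
I'm checking the program and the uniform roughly like this (a sketch, not the exact code):

GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked); // comes back GL_TRUE, so the link itself succeeded

if (unif_tilesize == -1) {
    // this branch is hit every time on the laptop, but never on the desktop
}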

If I swap the comments round in my vertex shader I get 12px point sprites fine.

Peculiarly, the exact same code works absolutely fine on my other computer. The OpenGL version on my laptop is 2.1.8304 and it's running an ATI Radeon X1200 (cf. an NVIDIA 8800 GT in my desktop), if that's relevant.

EDIT: I've changed the question title to better reflect the problem.

A second problem, which I think may be related, is that even when I do have 12px point sprites rendering (using gl_PointSize = 12 in the vertex shader), gl_PointCoord doesn't seem to vary across each sprite... – user1483596
Have you checked whether the program compiled and linked successfully, with glValidateProgram and glGetProgramiv? Do you always get -1 for the uniform location? It's normal for the compiler to optimise code and drop uniforms that are not used, i.e. when you comment out the line gl_PointSize = tile_size;. Have you tried setting a #version at the beginning of the GLSL file? Also, check whether your other machine (drivers) supports GL_PROGRAM_POINT_SIZE. I once had a problem similar to yours; it turned out I just needed to append EXT to GL_PROGRAM_POINT_SIZE :) Also, is "_VERTEX" really a (needed) part of that define? – Srđan
Thanks for the suggestions. The program is compiling and linking fine, but I always get -1 for the tile_size uniform location; other uniforms (a float and a sampler2D) and attributes bind fine. I didn't have version directives but have added them with no change. I've tried glEnable with GL_PROGRAM_POINT_SIZE, GL_PR..._SIZE_EXT, GL_VERTEX_PROGRAM_POINT_SIZE and combinations thereof, but nothing seems to work. – user1483596
Doing some googling about drivers, though, it seems plenty of people have had issues with ATI drivers and point sprite sizes... Looks like it's time to write a new renderer... – user1483596
"running an ATI radeon x1200" In general, I wouldn't trust any GLSL code to run on any pre-HD ATI graphics card. They're a minefield of driver bugs, none of which will ever be fixed since ATI stopped supporting those cards years ago.Nicol Bolas

2 Answers

1 vote

You forgot to call glUseProgram before setting the uniform.
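
That is, the program has to be current when glUniform1f is called; roughly:

glUseProgram(program);                                            // make the program current first
GLint unif_tilesize = glGetUniformLocation(program, "tile_size"); // this call takes the program explicitly
glUniform1f(unif_tilesize, 12.0f);                                // ...but glUniform* acts on the current program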

1 vote

So after another day of playing around I've come to a point where, although I haven't solved my original problem of not being able to set gl_PointSize from a uniform, I have modified my existing point sprite renderer to work on my ATI card (an old X1200), and I thought I'd share some of the things I've learned.

I think that something about gl_PointSize is broken (at least on my card); in the vertex shader I was able to get 8px point sprites using gl_PointSize=8.0;, but using gl_PointSize=tile_size; gave me 1px sprites whatever I tried to bind to the uniform tile_size.

Luckily I don't need different sized tiles for each vertex so I called glPointSize(tile_size) in my main.cpp instead and this worked fine.

In order to get gl_PointCoord to work (i.e. return values other than (0,0)) in my fragment shader, I had to call glTexEnvf( GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE ); in my main.cpp.
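
Putting those two host-side changes together, the point sprite setup in main.cpp ends up roughly like this (a sketch of my setup, not a complete listing):

glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);

// fixed-function point size instead of writing gl_PointSize in the vertex shader
glPointSize(tile_size);

// without this, gl_PointCoord stays at (0,0) over the whole sprite
glTexEnvf(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);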

There persisted a ridiculous problem in which my varyings were being messed up somewhere between my vertex and fragment shaders. After a long game of 'guess what to type into Google to get relevant information', I found (and promptly lost) a forum post where someone said that in some cases, if you don't use gl_TexCoord[0] in at least one of your shaders, your varyings will be corrupted.

In order to fix that I added a line at the end of my fragment shader:

_coord = gl_TexCoord[0].xy;

where _coord is an otherwise unused vec2 (note that gl_TexCoord is not used anywhere else).

Without this line all my colours went blue and my texture lookup broke.
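
For completeness, the fragment shader now ends up roughly like this (the sampler name and the texture lookup are placeholders; the last line is the important bit):

uniform sampler2D tile_texture; // placeholder name
vec2 _coord;                    // otherwise unused

void main(void) {
    gl_FragColor = texture2D(tile_texture, gl_PointCoord);

    // workaround: reading gl_TexCoord[0] once stops the driver corrupting the other varyings
    _coord = gl_TexCoord[0].xy;
}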