
I'm trying to pass an array of ints into the fragment shader by using a 1D texture. Although the code compiles and runs, when I look at the texture values in the shader, they are all zero!

This is the C++ code I have after following a number of tutorials:

GLuint texture;
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0 + 5); // use the 5th since first 4 may be taken
glBindTexture  (GL_TEXTURE_1D, texture);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RED_INTEGER, myVec.size(), 0, 
                               GL_RED_INTEGER, GL_INT, &myVec[0]);

GLint textureLoc =  glGetUniformLocation( program, "myTexture" );
glUniform1i(textureLoc, 5); 

And this how I try and access the texture in the shader:

uniform sampler1D myTexture; 
int dat = int(texture1D(myTexture, 0.0).r); // 0.0 is just an example 
if (dat == 0) { // always true!

I'm sure this is some trivial error on my part, but I just can't figure it out. I'm unfortunately constrained to using GLSL 1.20, so this syntax may seem outdated to some.

So why are the texture values in the shader always zero?

EDIT:

If I replace the ints with floats, I still have a problem:

std::vector <float> temp; 
// fill temp...
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage1D(GL_TEXTURE_1D, 0, GL_R32F, temp.size(), 0, GL_R32F, GL_FLOAT, &temp[0]);
// ...
glUniform1f(textureLoc, 5);

This time, just reading from the sampler seems to mess up the other textures...

I think the result is normalized to [0,1] float for int formats texture fetches. Try a isampler1D sampler - a.lasram
@AndonM.Coleman - isampler1D won't compile, and GL_R16I and GL_R32I don't change anything. - nbubis
@AndonM.Coleman - It doesn't seem to work with floats either - I've added the code to the question. Thank you again for your time! - nbubis
See my answer, and do not use a floating-point texture. Use a regular fixed-point (unsigned normalized is the technical term) texture. So something like glTexImage1D (GL_TEXTURE_1D, 0, GL_R8, myVec.size (), 0, GL_RED, GL_UNSIGNED_BYTE, &myVec [0]);. You will need to make adjustments to the size and type and the GLSL shader if you need to store more than 256 values. - Andon M. Coleman

1 Answer


To begin with, GL_RED_INTEGER is wrong for the internal format; it is a pixel transfer format, not an internal format, so the call fails and the texture is never filled. I would use GL_R32I (32-bit signed integer) instead; you could also use GL_R8I or GL_R16I depending on your actual storage requirements - smaller types are generally better. Also, do not use a sampler1D for an integer texture, use isampler1D.
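For reference, on a GL 3.0+ context (where isampler1D exists) the fetch side would look something like this sketch - note that #version 130 and texelFetch are exactly what the question's GLSL 1.20 constraint rules out, and the texel index 0 here is just a placeholder:

```glsl
#version 130

uniform isampler1D myTexture;

void main() {
    // texelFetch returns the raw integer texel, no normalization involved
    int dat = texelFetch(myTexture, 0, 0).r;
    gl_FragColor = (dat == 0) ? vec4(0.0) : vec4(1.0);
}
```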

Since OpenGL ES does not support data type conversion when you use a pixel transfer function (e.g. glTexImage2D (...)), you can usually find the optimal combination of internal format, format, and type in a table if you look through the OpenGL ES docs.


Note, however, that integer textures require GL 3.0; on an implementation limited to GLSL 1.20 (GL 2.1), calling glTexImage1D with GL_R32I / GL_RED_INTEGER will simply generate GL_INVALID_ENUM.

That is not to say you cannot pack an integer value into a standard fixed-point texture and get an integer value back out in a shader. If you use an 8-bit per component format (e.g. GL_R8) you can store values in the range 0 - 255. In your shader, after you do a texture lookup (I would use GL_NEAREST for the texture filter; linear filtering will really mess things up) you can multiply the floating-point result by 255.0 and cast to int. It is far from perfect, but we got along fine without integer textures for many years.

Here is a modification to your shader that does exactly that:

#version 120

uniform sampler1D myTexture;
int dat = int(texture1D (myTexture, (idx + 0.5)/float(textureWidth)).r * 255.0);
if (dat == 0) { // not always true!

This assumes GL_R8 for the internal format; use 65535.0 instead of 255.0 for GL_R16.