6
votes

I am using OpenGL to do some GPGPU computations through the combination of one vertex shader and one fragment shader. I need to do computations on an image at different scales. I would like to use mipmaps, since their generation can be automatic and hardware accelerated. However, I can't manage to access the mipmap levels in the fragment shader.

I enabled automatic mipmap generation: glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);

I tried using texture2DLod in the shader with no luck; it simply kept returning the base-level texture. I also tried calling glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, X) in the main program, and it did not change anything.

How would you do that?

I am using Linux. My graphics card is a fairly old NVIDIA Quadro. Here is my glxinfo output with all the supported extensions.
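For completeness, the setup described above looks roughly like this (a sketch, not my exact code; error handling and the draw call omitted):

```c
/* Sketch of the legacy (pre-GL 3.0) automatic mipmap setup. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* Ask the driver to regenerate mipmaps whenever level 0 changes. */
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);

/* A mipmapping minification filter is required; otherwise sampling
   never leaves level 0. Mipmap modes are not valid for MAG_FILTER. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_NEAREST_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```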

3
I upvoted your question because it's "useful and clear", but you seem to be doing everything right. What does textureSize(sampler, lod) say (in your fragment shader)? – Calvin1602
There is no such function available to me. I think my card is too old (I have just checked, and my GLSL version is the one from GL_ARB_shading_language_100). That would explain a lot, but I don't understand why the shader would accept the *Lod variant without complaining if it was unsupported... – Jim
I don't understand either why the glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, X) and glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, X) calls have no effect. This is not GLSL-related... I am pretty confused right now. – Jim
I have some news. I set glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST) and glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST) (mipmap modes are not valid for the magnification filter), and the texture2DLod calls now all return (1.0, 1.0, 1.0, 1.0)... I guess mipmap lookups in fragment shaders are simply not supported by my graphics card, and I still find it strange that it did not complain. However, the GL_TEXTURE_BASE_LEVEL/GL_TEXTURE_MAX_LEVEL calls now seem to work. – Jim
One last thing: from the "Related" section I read stackoverflow.com/questions/3071006/…. I then tried the texture2D(sampler, coord, bias) overload, and I could access the desired mipmap level in the fragment shader. I think my problem is solved, but I am surprised by the absence of runtime errors when trying to use unsupported features... Or maybe there is something else I did wrong :). – Jim

3 Answers

3
votes
gvec4 textureLod (gsampler1D sampler, float P, float lod)
gvec4 textureLod (gsampler2D sampler, vec2 P, float lod)
gvec4 textureLod (gsampler3D sampler, vec3 P, float lod)
gvec4 textureLod (gsamplerCube sampler, vec3 P, float lod)
float textureLod (sampler1DShadow sampler, vec3 P, float lod)
float textureLod (sampler2DShadow sampler, vec3 P, float lod)
gvec4 textureLod (gsampler1DArray sampler, vec2 P, float lod)
gvec4 textureLod (gsampler2DArray sampler, vec3 P, float lod)
float textureLod (sampler1DArrayShadow sampler, vec3 P, float lod)

Did you try one of those built-ins? (These overloads are the GLSL 1.30+ names; in GLSL 1.20 the equivalents are the texture2DLod family.) Also, lod has to be a float. What errors/warnings does the GLSL compiler report?
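For example, in a GLSL 1.30+ fragment shader (a sketch; the uniform and variable names are mine):

```glsl
#version 130

uniform sampler2D img;
in vec2 uv;
out vec4 color;

void main()
{
    /* Explicitly sample mipmap level 3, bypassing the
       automatic LOD computation. */
    color = textureLod(img, uv, 3.0);
}
```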

1
votes

The GLSL 1.20 specification (section 8.7) states that fragment shaders cannot choose their own mipmap level, and that the texture*Lod functions are only available in vertex shaders. If anything, you may be able to use the bias parameter of the non-Lod variants to change the mipmap level, but it can only shift it relative to what the hardware has already calculated for you.

I don't know offhand whether later versions of GLSL changed that. (They did: from GLSL 1.30 on, textureLod is available in fragment shaders as well.)
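A sketch of the bias approach under GLSL 1.20 (sampler name is mine; with an automatic LOD of 0, a bias of 3.0 roughly selects level 3):

```glsl
#version 120

uniform sampler2D img;

void main()
{
    /* The optional third argument biases the LOD the hardware
       computed; it is added to, not substituted for, that LOD. */
    gl_FragColor = texture2D(img, gl_TexCoord[0].st, 3.0);
}
```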

1
votes

try:

glGenerateMipmapEXT(GL_TEXTURE_2D);

after you bind the texture. (And before doing the rendering of course)

The glTexParameteri GL_GENERATE_MIPMAP approach is deprecated, I think... Best regards, Digi
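A sketch of that ordering (regenerate the chain each time the base level changes; on GL 3.0+ the core name glGenerateMipmap can replace the EXT-suffixed one):

```c
/* Sketch: explicit mipmap regeneration after updating level 0. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmapEXT(GL_TEXTURE_2D); /* or glGenerateMipmap on GL 3.0+ */
/* ...then render. */
```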