
In a typical vertex array / buffer setup, I am trying to pass an unsigned int attribute alongside the usual ones (vertex position, normal, texture coordinates...). However, the value of this attribute ends up wrong somehow: I am unsure whether the value itself is wrong or the attribute is simply never set.

Starting from a simple example, say I have defined the following C++ input structure:

struct buffer_data_t
{
    glm::vec3 vertex;
    glm::vec3 normal;
    glm::vec2 texCoords;
};
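The interleaved buffer is then just a contiguous array of this struct. For illustration, a hypothetical vertice vector (the real contents of course come from the asker's mesh) could be built like this:

#include <vector>
#include <glm/glm.hpp>

// Hypothetical example data; the actual mesh data is the asker's.
std::vector<buffer_data_t> vertice = {
    // vertex               normal               texCoords
    { {0.0f, 0.0f, 0.0f}, {0.0f, 0.0f, 1.0f}, {0.0f, 0.0f} },
    { {1.0f, 0.0f, 0.0f}, {0.0f, 0.0f, 1.0f}, {1.0f, 0.0f} },
    { {0.0f, 1.0f, 0.0f}, {0.0f, 0.0f, 1.0f}, {0.0f, 1.0f} },
};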

Preparing my vertex array would look like so:

// Assume this 'shader.attribute(..)' works and returns the attribute's location
unsigned int shadInputs[] = {
    (unsigned int)shader.attribute("VS_Vertex"),
    (unsigned int)shader.attribute("VS_Normal"),
    (unsigned int)shader.attribute("VS_TexCoords"),
};

glGenBuffers(1, &glBuffer);
glBindBuffer(GL_ARRAY_BUFFER, glBuffer);
glBufferData(GL_ARRAY_BUFFER, vertice.size() * sizeof(buffer_data_t), &vertice[0], GL_STATIC_DRAW);
glGenVertexArrays(1, &glArray);
glBindVertexArray(glArray);
{
    glBindBuffer(GL_ARRAY_BUFFER, glBuffer);
    glVertexAttribPointer(shadInputs[0], 3, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 0));
    glVertexAttribPointer(shadInputs[1], 3, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 1));
    glVertexAttribPointer(shadInputs[2], 2, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2));
    glEnableVertexAttribArray(shadInputs[0]);
    glEnableVertexAttribArray(shadInputs[1]);
    glEnableVertexAttribArray(shadInputs[2]);
}
glBindVertexArray(0);
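For reference, a helper like shader.attribute presumably boils down to glGetAttribLocation; a minimal sketch (the asker's actual wrapper is not shown in the question):

// Hypothetical wrapper; glGetAttribLocation returns -1 if the named
// attribute is not active in the linked program.
GLint attribute(GLuint program, const char* name)
{
    return glGetAttribLocation(program, name);
}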

And the vertex shader inputs would be defined like this:

in vec3 VS_Vertex;
in vec3 VS_Normal;
in vec2 VS_TexCoords;
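For completeness, a minimal vertex shader around these inputs might look as follows; the MVP uniform and the pass-through output names are assumptions, not the asker's actual shader:

#version 330 core

in vec3 VS_Vertex;
in vec3 VS_Normal;
in vec2 VS_TexCoords;

out vec3 FS_Normal;     // assumed pass-through outputs
out vec2 FS_TexCoords;

uniform mat4 MVP;       // assumed model-view-projection matrix

void main()
{
    FS_Normal    = VS_Normal;
    FS_TexCoords = VS_TexCoords;
    gl_Position  = MVP * vec4(VS_Vertex, 1.0);
}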

Also, for the sake of the example, let's say I have a texture sampler in my fragment shader that uses those input texture coordinates:

uniform sampler2D ColourMap;
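The sampling itself would presumably look something like this (FS_TexCoords and FragColour are assumed names):

in vec2 FS_TexCoords;   // assumed name
out vec4 FragColour;    // assumed name

uniform sampler2D ColourMap;

void main()
{
    FragColour = texture(ColourMap, FS_TexCoords);
}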

Introducing my issue

So far so good: with the code above I can render textured primitives successfully. Now I would like to select a different colour map depending on the face being rendered. To do that, I want to introduce an index as part of the vertex attributes. The changes are:

C++ data structure:

struct buffer_data_t
{
    glm::vec3 vertex;
    glm::vec3 normal;
    glm::vec2 texCoords;
    unsigned int textureId; // <---
};

Prepare vertex array:

unsigned int shadInputs[] = {
    (unsigned int)shader.attribute("VS_Vertex"),
    (unsigned int)shader.attribute("VS_Normal"),
    (unsigned int)shader.attribute("VS_TexCoords"),
    (unsigned int)shader.attribute("VS_TextureId"),
};

// ...

glBindBuffer(GL_ARRAY_BUFFER, glBuffer);
glVertexAttribPointer(shadInputs[0], 3, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 0));
glVertexAttribPointer(shadInputs[1], 3, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 1));
glVertexAttribPointer(shadInputs[2], 2, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2));
glVertexAttribPointer(shadInputs[3], 1, GL_UNSIGNED_INT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2 + sizeof(glm::vec2))); // <---
glEnableVertexAttribArray(shadInputs[0]);
glEnableVertexAttribArray(shadInputs[1]);
glEnableVertexAttribArray(shadInputs[2]);
glEnableVertexAttribArray(shadInputs[3]);

Then the vertex shader defines that new input, and also a flat output:

in vec3 VS_Vertex;
in vec3 VS_Normal;
in vec2 VS_TexCoords;
in uint VS_TextureId;

...

flat out uint FS_TextureId;

The fragment shader is adjusted to take the flat input, and (again for the sake of the example) the colour map is now an array to pick from:

...
uniform sampler2D ColourMaps[2];
flat in uint FS_TextureId;
...
texture(ColourMaps[FS_TextureId], ... );

These changes do not work, specifically because of the vertex shader input attribute VS_TextureId. I was able to prove this (and to find a workaround) by not using the unsigned integer type and resorting instead to vec2 (or vec3; it works either way). That is:

VS:

in vec2 VS_TextureId;
flat out int FS_TextureId;
FS_TextureId = int(VS_TextureId.x);

FS:

flat in int FS_TextureId;
texture(ColourMaps[FS_TextureId], ... );

My assumption

I am guessing that this is the line at fault, although I cannot figure out how or why:

glVertexAttribPointer(shadInputs[3], 1, GL_UNSIGNED_INT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2 + sizeof(glm::vec2)));

Note: I checked the result of 'shader.attribute("VS_TextureId")' and it is correct, meaning the attribute in the vertex shader is properly defined and found.
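Assuming the helper wraps glGetAttribLocation, that check presumably amounts to something like the following (program is a hypothetical handle to the linked program):

#include <cassert>

// -1 would mean "VS_TextureId" is not an active attribute in the linked
// program (e.g. it was unused and optimized away by the GLSL compiler).
GLint loc = glGetAttribLocation(program, "VS_TextureId");
assert(loc != -1);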

Can you see what the problem could be?

Comment from Nicol Bolas: FYI: indices into arrays of opaque types (like samplers) must be dynamically uniform values, and FS_TextureId is not one (not unless every triangle in the rendering command gets the same value).

1 Answer


If you want to specify an array of vertex attributes with an integral data type, then you have to use glVertexAttribIPointer (note the I in the middle of the function name) rather than glVertexAttribPointer.

See OpenGL 4.6 API Core Profile Specification; 10.2. CURRENT VERTEX ATTRIBUTE VALUES; page 348

The VertexAttribI* commands specify signed or unsigned fixed-point values that are stored as signed or unsigned integers, respectively. Such values are referred to as pure integers.

...

All other VertexAttrib* commands specify values that are converted directly to the internal floating-point representation.

This means that glVertexAttribPointer converts the integer data in the buffer to floating point, while the shader reads the attribute as a pure integer, so the value arriving in the shader is garbage (for instance, the integer 1 becomes the float 1.0f, whose bit pattern is 0x3F800000). The specification of the vertex attribute therefore has to be:

//glVertexAttribPointer(shadInputs[3], 1, GL_UNSIGNED_INT, GL_FALSE, 
//    sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2 + sizeof(glm::vec2))); 
glVertexAttribIPointer(shadInputs[3], 1, GL_UNSIGNED_INT,
    sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2 + sizeof(glm::vec2)));
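As a side note, offsetof from <cstddef> avoids the manual sizeof arithmetic and stays correct if the struct layout ever changes; a sketch, assuming buffer_data_t is a standard-layout type:

#include <cstddef> // offsetof

glVertexAttribIPointer(shadInputs[3], 1, GL_UNSIGNED_INT,
    sizeof(buffer_data_t), (void*)offsetof(buffer_data_t, textureId));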

In general, what you are trying to achieve doesn't work like this anyway. The index into an array of samplers has to be "dynamically uniform". This means the index has to be the same for all fragments of the rendering command (e.g. a constant or a uniform variable).

See GLSL 4.60 Specification - 4.1.7. Opaque Types (page 33)

Texture-combined sampler types are opaque types, declared and behaving as described above for opaque types. When aggregated into arrays within a shader, they can only be indexed with a dynamically uniform integral expression, otherwise results are undefined. [...]

I recommend using a single GL_TEXTURE_2D_ARRAY texture rather than an array of GL_TEXTURE_2D textures (see the OpenGL wiki page on Texture). In this case you can use a 3-dimensional floating-point texture coordinate, where the 3rd component selects the layer, as in the sketch below.
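A fragment shader sketch of that approach (the in/out names are assumptions):

#version 330 core

uniform sampler2DArray ColourMaps;  // one array texture instead of sampler2D[2]

in vec2 FS_TexCoords;               // assumed name
flat in uint FS_TextureId;

out vec4 FragColour;                // assumed name

void main()
{
    // The 3rd texture coordinate selects the layer. No array of opaque
    // types is indexed, so the dynamically-uniform restriction is gone.
    FragColour = texture(ColourMaps, vec3(FS_TexCoords, float(FS_TextureId)));
}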