In a typical vertex array / buffer setup, I am trying to pass an unsigned int attribute alongside the classical ones (vertex, normal, texture coordinates, ...). However, the value of this attribute somehow ends up wrong: I cannot tell whether the value is incorrect or the attribute is simply never set.
Starting from a simple example, say I have defined the following C++ input structure:
struct buffer_data_t
{
    glm::vec3 vertex;
    glm::vec3 normal;
    glm::vec2 texCoords;
};
Preparing my vertex array would look like so:
// Assume this 'shader.attribute(..)' is working and returns the attribute's position
unsigned int shadInputs[] = {
    (unsigned int)shader.attribute("VS_Vertex"),
    (unsigned int)shader.attribute("VS_Normal"),
    (unsigned int)shader.attribute("VS_TexCoords"),
};
glGenBuffers(1, &glBuffer);
glBindBuffer(GL_ARRAY_BUFFER, glBuffer);
glBufferData(GL_ARRAY_BUFFER, vertice.size() * sizeof(buffer_data_t), &vertice[0], GL_STATIC_DRAW);
glGenVertexArrays(1, &glArray);
glBindVertexArray(glArray);
{
    glBindBuffer(GL_ARRAY_BUFFER, glBuffer);
    glVertexAttribPointer(shadInputs[0], 3, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 0));
    glVertexAttribPointer(shadInputs[1], 3, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 1));
    glVertexAttribPointer(shadInputs[2], 2, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2));
    glEnableVertexAttribArray(shadInputs[0]);
    glEnableVertexAttribArray(shadInputs[1]);
    glEnableVertexAttribArray(shadInputs[2]);
}
glBindVertexArray(0);
And the vertex shader inputs are defined like this:
in vec3 VS_Vertex;
in vec3 VS_Normal;
in vec2 VS_TexCoords;
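For completeness, rendering later amounts to binding the vertex array and drawing, along these lines:

glBindVertexArray(glArray);
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)vertice.size());
glBindVertexArray(0);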
Also, for the sake of my example, let's say I have a texture sampler in my fragment shader that uses those incoming texture coordinates:
uniform sampler2D ColourMap;
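which the fragment shader samples roughly like this (a minimal sketch; FS_TexCoords and FragColour are assumed names for the interpolated coordinates and the colour output):

in vec2 FS_TexCoords;
out vec4 FragColour;

void main()
{
    FragColour = texture2D(ColourMap, FS_TexCoords);
}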
Introducing my issue
So far so good: with the code above I can render textured primitives successfully. Now I would like to select a different colour map depending on the face being rendered. To do that, I want to introduce an index as part of the vertex attributes. The changes are:
C++ data structure:
struct buffer_data_t
{
    glm::vec3 vertex;
    glm::vec3 normal;
    glm::vec2 texCoords;
    unsigned int textureId; // <---
};
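The idea is that all three vertices of a face carry the same index. A hypothetical fill (not my actual mesh code) to make the intent concrete:

std::vector<buffer_data_t> vertice;
// First triangle: every vertex points at ColourMaps[0]...
vertice.push_back({ glm::vec3(0, 0, 0), glm::vec3(0, 0, 1), glm::vec2(0, 0), 0u });
vertice.push_back({ glm::vec3(1, 0, 0), glm::vec3(0, 0, 1), glm::vec2(1, 0), 0u });
vertice.push_back({ glm::vec3(0, 1, 0), glm::vec3(0, 0, 1), glm::vec2(0, 1), 0u });
// ...second triangle: every vertex points at ColourMaps[1].
vertice.push_back({ glm::vec3(1, 0, 0), glm::vec3(0, 0, 1), glm::vec2(1, 0), 1u });
vertice.push_back({ glm::vec3(1, 1, 0), glm::vec3(0, 0, 1), glm::vec2(1, 1), 1u });
vertice.push_back({ glm::vec3(0, 1, 0), glm::vec3(0, 0, 1), glm::vec2(0, 1), 1u });

With 'flat' interpolation, the whole triangle reads the value of its provoking vertex, so a per-face id works as long as all three vertices agree.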
Prepare vertex array:
unsigned int shadInputs[] = {
    (unsigned int)shader.attribute("VS_Vertex"),
    (unsigned int)shader.attribute("VS_Normal"),
    (unsigned int)shader.attribute("VS_TexCoords"),
    (unsigned int)shader.attribute("VS_TextureId"),
};
// ...
glBindBuffer(GL_ARRAY_BUFFER, glBuffer);
glVertexAttribPointer(shadInputs[0], 3, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 0));
glVertexAttribPointer(shadInputs[1], 3, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 1));
glVertexAttribPointer(shadInputs[2], 2, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2));
glVertexAttribPointer(shadInputs[3], 1, GL_UNSIGNED_INT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2 + sizeof(glm::vec2))); // <---
glEnableVertexAttribArray(shadInputs[0]);
glEnableVertexAttribArray(shadInputs[1]);
glEnableVertexAttribArray(shadInputs[2]);
glEnableVertexAttribArray(shadInputs[3]);
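As an aside, the hand-computed byte offsets can be written with offsetof instead, which rules out an offset mistake as the cause (a sketch; this assumes buffer_data_t is standard-layout, which it should be with plain glm members):

#include <cstddef> // offsetof

glVertexAttribPointer(shadInputs[0], 3, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)offsetof(buffer_data_t, vertex));
glVertexAttribPointer(shadInputs[1], 3, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)offsetof(buffer_data_t, normal));
glVertexAttribPointer(shadInputs[2], 2, GL_FLOAT, GL_FALSE, sizeof(buffer_data_t), (void*)offsetof(buffer_data_t, texCoords));
glVertexAttribPointer(shadInputs[3], 1, GL_UNSIGNED_INT, GL_FALSE, sizeof(buffer_data_t), (void*)offsetof(buffer_data_t, textureId));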
Then the vertex shader defines that new input, and also a 'flat' output to carry it to the fragment stage:
in vec3 VS_Vertex;
in vec3 VS_Normal;
in vec2 VS_TexCoords;
in uint VS_TextureId;
...
flat out uint FS_TextureId;
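with main() forwarding the value to the fragment stage, essentially:

void main()
{
    // ... gl_Position, normal and texcoord plumbing elided ...
    FS_TextureId = VS_TextureId;
}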
The fragment shader is adjusted to take the input as 'flat', and (again for the sake of the example) the colour map is now an array we can pick from:
...
uniform sampler2D ColourMaps[2];
flat in uint FS_TextureId;
...
texture2D(ColourMaps[FS_TextureId], ... );
These changes do not work, specifically because of the vertex shader input attribute 'VS_TextureId'. I was able to prove this (and to find a workaround) by not using the uint type and instead resorting to vec2 (or vec3, either works). That is:
VS:
in vec2 VS_TextureId;
flat out int FS_TextureId;
FS_TextureId = int(VS_TextureId.x);
FS:
flat in int FS_TextureId;
texture2D(ColourMaps[FS_TextureId], ... );
My assumption
I am guessing that this is the line at fault, although I cannot figure out how or why:
glVertexAttribPointer(shadInputs[3], 1, GL_UNSIGNED_INT, GL_FALSE, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2 + sizeof(glm::vec2)));
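If the problem is that glVertexAttribPointer presents the data to the shader as floating point, then maybe the integer-specific variant glVertexAttribIPointer (note the extra 'I'; it takes no 'normalized' parameter) is what an integer input requires. A sketch of what that call would look like:

glVertexAttribIPointer(shadInputs[3], 1, GL_UNSIGNED_INT, sizeof(buffer_data_t), (void*)(sizeof(glm::vec3) * 2 + sizeof(glm::vec2)));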
Note: I checked the result of 'shader.attribute("VS_TextureId")' and it is correct, meaning the attribute in the vertex shader is well defined and found.
Can you see what the problem could be?
Comment: indexing an array of samplers requires a dynamically uniform expression, and FS_TextureId is not one (not unless every triangle in the rendering command gets the same value). – Nicol Bolas