
I decided to post this because I now believe the problem isn't simply in the shader program, but most probably in the OBJ import and mesh initialization process. I wanted to write a quick Lambert shader to finally get something appearing on the screen. The result is riddled with interesting artifacts and visibility issues:

[Poor Bunny - Exhibit A] [Poor Bunny - Exhibit B]

It appears as though the vertex positions are encoded correctly, but either the normals or the indices are completely messed up.
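Before diving into the code: a quick way to separate "bad normals" from "bad indices" is to bypass lighting entirely and visualize the interpolated normals as colors -- smoothly varying shades suggest the geometry is fine, while per-triangle noise points at the index buffer. A minimal debug fragment shader for this (a sketch; it assumes the same VS_out interface block as the shaders below):

#version 330

in VS_out
{
    vec3 fragNormal;
    vec3 fragPos;
} fs_in;

out vec4 fragColor;

void main()
{
    // Remap the normal from [-1, 1] into the [0, 1] RGB range
    vec3 n = normalize(fs_in.fragNormal);
    fragColor = vec4(n * 0.5 + 0.5, 1.0);
}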

Vertex Shader

#version 330

// MeshVertex
layout(location = 0) in vec3 a_Position;
layout(location = 1) in vec3 a_Normal;
layout(location = 2) in vec2 a_UV;
layout(location = 3) in vec3 a_Tangent;
layout(location = 4) in vec3 a_BiTangent;

uniform mat4 View;
uniform mat4 Projection;
uniform mat4 Model;

out VS_out
{
    vec3 fragNormal;
    vec3 fragPos;
} vs_out;

void main()
{
    mat3 normalMatrix   = mat3(transpose(inverse(Model)));
    vec4 position       = vec4(a_Position, 1.f);

    vs_out.fragPos      = (Model * position).xyz;
    vs_out.fragNormal   = normalMatrix * a_Normal;

    gl_Position = Projection * View * Model * position;
}

I initially thought I was passing the vertex normals to the fragment shader incorrectly. I have seen some samples multiply the vertex position by the ModelView matrix, but that seems unintuitive here: my lights are positioned in world space, so I need the world-space coordinates of my vertices, hence the multiplication by the Model matrix only. If there are no red flags in this thought process, here is the fragment shader:

#version 330

struct LightSource
{
    vec3 position;
    vec3 intensity;
};
uniform LightSource light;

in VS_out
{
    vec3 fragNormal;
    vec3 fragPos;
} fs_in;

struct Material
{
    vec4 color;
    vec3 ambient;
};

uniform Material material;

out vec4 fragColor;

void main()
{
    // just playing around with some values for now; don't worry, removing this still does not fix the issue
    vec3 ambient = normalize(vec3(69, 111, 124));

    vec3 norm       = normalize(fs_in.fragNormal);
    vec3 pos        = fs_in.fragPos;
    vec3 lightDir   = normalize(light.position - pos);

    float lambert       = max(dot(norm, lightDir), 0.0);
    vec3 illumination   = (lambert * light.intensity) + ambient;

    // write to an explicit output; gl_FragColor is unavailable in core-profile GLSL 330
    fragColor = vec4(illumination * material.color.xyz, 1.f);
}
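For completeness, this is roughly how the uniforms get fed in from the C++ side. Treat it as a sketch rather than my exact code (UploadUniforms and the light values are made up for illustration), but the GL and glm calls are the real ones:

#include <glm/gtc/type_ptr.hpp>

// Sketch: upload the uniforms the two shaders above expect.
// "program" is the linked shader program.
void UploadUniforms(GLuint program,
                    const glm::mat4& model,
                    const glm::mat4& view,
                    const glm::mat4& projection,
                    const glm::vec3& lightPosWorld)
{
    glUseProgram(program);

    glUniformMatrix4fv(glGetUniformLocation(program, "Model"),      1, GL_FALSE, glm::value_ptr(model));
    glUniformMatrix4fv(glGetUniformLocation(program, "View"),       1, GL_FALSE, glm::value_ptr(view));
    glUniformMatrix4fv(glGetUniformLocation(program, "Projection"), 1, GL_FALSE, glm::value_ptr(projection));

    // The light position is given in world space, matching fragPos above
    glUniform3fv(glGetUniformLocation(program, "light.position"),  1, glm::value_ptr(lightPosWorld));
    glUniform3f (glGetUniformLocation(program, "light.intensity"), 1.f, 1.f, 1.f);

    // material.color / material.ambient are set the same way via glUniform4fv / glUniform3fv
}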

Now the main suspicion is how the OBJ is interpreted. I use tinyobjloader for this. I mostly copied the sample code from its GitHub page and initialized my native vertex type from that data.

OBJ Import Code

bool Model::Load(const void* rawBinary, size_t bytes)
{
    tinyobj::ObjReader reader;
    if(reader.ParseFromString(std::string((const char*)rawBinary, bytes), ""))
    {

        const tinyobj::attrib_t& attrib = reader.GetAttrib();
        const std::vector<tinyobj::shape_t>& shapes = reader.GetShapes();

        m_Meshes.resize(shapes.size());
        m_Materials.resize(shapes.size());

        // Loop over shapes; in our case, each shape corresponds to a mesh object
        for(size_t s = 0; s < shapes.size(); s++)
        {
            // Per-shape buffers: vertices and triangle indices must restart at
            // zero for every mesh, otherwise a multi-shape OBJ produces indices
            // that point into another mesh's vertex range
            std::vector<MeshVertex> vertices;
            std::vector<Triangle> triangles;

            // Loop over faces (polygons)
            size_t index_offset = 0;
            for(size_t f = 0; f < shapes[s].mesh.num_face_vertices.size(); f++) 
            {
                // Number of vertices in face f
                const size_t fv = shapes[s].mesh.num_face_vertices[f];
                ASSERT(fv == 3, "Only supporting triangles for now");

                Triangle tri;

                // Loop over vertices in the face.
                for(size_t v = 0; v < fv; v++) {
                    // access to vertex
                    tinyobj::index_t idx = shapes[s].mesh.indices[index_offset + v];

                    tinyobj::real_t vx = 0.f;
                    tinyobj::real_t vy = 0.f;
                    tinyobj::real_t vz = 0.f;
                    tinyobj::real_t nx = 0.f;
                    tinyobj::real_t ny = 0.f;
                    tinyobj::real_t nz = 0.f;
                    tinyobj::real_t tx = 0.f;
                    tinyobj::real_t ty = 0.f;

                    vx = attrib.vertices[3 * idx.vertex_index + 0];
                    vy = attrib.vertices[3 * idx.vertex_index + 1];
                    vz = attrib.vertices[3 * idx.vertex_index + 2];

                    if(idx.normal_index >= 0 && !attrib.normals.empty())
                    {
                        nx = attrib.normals[3 * idx.normal_index + 0];
                        ny = attrib.normals[3 * idx.normal_index + 1];
                        nz = attrib.normals[3 * idx.normal_index + 2];
                    }

                    if(idx.texcoord_index >= 0 && !attrib.texcoords.empty())
                    {
                        tx = attrib.texcoords[2 * idx.texcoord_index + 0];
                        ty = attrib.texcoords[2 * idx.texcoord_index + 1];
                    }
                    // Populate our native vertex type
                    MeshVertex meshVertex;
                    meshVertex.Position = glm::vec3(vx, vy, vz);
                    meshVertex.Normal = glm::vec3(nx, ny, nz);
                    meshVertex.UV = glm::vec2(tx, ty);
                    meshVertex.BiTangent = glm::vec3(0.f);
                    meshVertex.Tangent = glm::vec3(0.f);

                    vertices.push_back(meshVertex);
                    tri.Idx[v] = index_offset + v;
                }
                triangles.emplace_back(tri);
                index_offset += fv;

                // per-face material
                //shapes[s].mesh.material_ids[f];
            }

            // Adding meshes should occur here!
            m_Meshes[s] = std::make_unique<StaticMesh>(vertices, triangles);
            // m_Materials[s] = ....
        }
    }
    else
    {
        // Parsing failed; report failure instead of silently returning success
        return false;
    }

    return true;
}

The way I understand OBJ, an OpenGL index does not equate to a face element found in the OBJ, because each face element carries separate indices into the position, normal, and texcoord arrays. So instead, I copy the vertex attributes indexed by each face element into my native MeshVertex structure -- this represents one vertex of my mesh; the running count of face elements then simply becomes the corresponding index for my index buffer object. In my case, I use a Triangle structure instead, but it's effectively the same thing.
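A consequence of this approach is that a vertex shared by several faces gets duplicated once per face -- harmless for correctness, just wasteful. A common refinement is to deduplicate on the full (vertex, normal, texcoord) index triple. A sketch of that idea, where BuildMeshVertex is a hypothetical helper doing the same attribute copying as the loop above:

#include <map>
#include <tuple>

// Sketch: reuse an emitted vertex when the exact same OBJ index triple
// has been seen before, instead of duplicating it for every face.
std::map<std::tuple<int, int, int>, uint32_t> seen;

auto GetOrAddVertex = [&](const tinyobj::index_t& idx) -> uint32_t
{
    const auto key = std::make_tuple(idx.vertex_index, idx.normal_index, idx.texcoord_index);
    auto it = seen.find(key);
    if(it != seen.end())
        return it->second;                              // reuse the existing vertex

    const uint32_t newIndex = (uint32_t)vertices.size();
    vertices.push_back(BuildMeshVertex(attrib, idx));   // hypothetical helper
    seen.emplace(key, newIndex);
    return newIndex;
};

// In the face loop: tri.Idx[v] = GetOrAddVertex(idx);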

The Triangle struct, if you're interested:

struct Triangle
{
    uint32_t Idx[3];

    Triangle(uint32_t v1, uint32_t v2, uint32_t v3)
    {
        Idx[0] = v1;
        Idx[1] = v2;
        Idx[2] = v3;
    }

    Triangle(const Triangle& Other)
    {
        Idx[0] = Other.Idx[0];
        Idx[1] = Other.Idx[1];
        Idx[2] = Other.Idx[2];
    }

    Triangle()
    {
    }
};
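Since triangles.data() is uploaded verbatim as the index buffer (see InitBuffers below), this struct must be tightly packed, i.e. exactly three uint32_ts with no padding. A cheap compile-time guard for that assumption:

static_assert(sizeof(Triangle) == 3 * sizeof(uint32_t),
              "Triangle must be tightly packed to be usable as a raw index buffer");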

Other than that, I have no idea what could cause this problem, so I am open to hearing new thoughts; perhaps someone experienced recognizes what these artifacts signify. If you want to take a deeper dive, I can post the mesh initialization code as well.

EDIT: I tried importing an FBX model as well and encountered a very similar issue. I am now suspecting silly errors in my OpenGL code that initializes the mesh.

This initializes the OpenGL buffers from arbitrary vertex data, plus the triangles to index it by:

void Mesh::InitBuffers(const void* vertexData, size_t size, const std::vector<Triangle>& triangles)
{
    glGenVertexArrays(1, &m_vao);
    glBindVertexArray(m_vao);

    // Interleaved Vertex Buffer
    glGenBuffers(1, &m_vbo);
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glBufferData(GL_ARRAY_BUFFER, size, vertexData, GL_STATIC_DRAW);

    // Index Buffer
    glGenBuffers(1, &m_ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Triangle) * triangles.size(), triangles.data(), GL_STATIC_DRAW);

    // Unbind the VAO first: the GL_ELEMENT_ARRAY_BUFFER binding is part of the
    // VAO's state, so unbinding it while the VAO is still bound would detach
    // the index buffer from the VAO
    glBindVertexArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
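For context, the call site looks roughly like this (a sketch; the byte size is the vertex count times the interleaved vertex stride):

// Sketch of the call site: the interleaved MeshVertex array goes in as raw
// bytes, and the Triangle list supplies the index buffer contents.
mesh.InitBuffers(vertices.data(),
                 vertices.size() * sizeof(MeshVertex),
                 triangles);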

Then I set up the layout of the vertex buffer using a BufferLayout structure that specifies the attributes we want.

void Mesh::SetBufferLayout(const BufferLayout& layout)
{
    glBindVertexArray(m_vao);
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);

    uint32_t stride = layout.GetStride();
    int i = 0;
    for(const BufferElement& element : layout)
    {
        // Attribute index i matches the layout(location = i) in the vertex shader
        glEnableVertexAttribArray(i);
        glVertexAttribPointer(i, element.GetElementCount(), GLType(element.Type),
                              element.Normalized, stride, (const void*)(uintptr_t)element.Offset);
        ++i;
    }

    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindVertexArray(0);
}

So in our case, the BufferLayout corresponds to the MeshVertex I populated: a Position (float3), Normal (float3), UV (float2), Tangent (float3), and BiTangent (float3). I can confirm via debugging that the strides, offsets, and other values coming from the BufferElement are exactly what I expect, so I am concerned with the nature of the OpenGL calls I am making.
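In other words, for MeshVertex the loop above should be equivalent to these raw calls, written out by hand with offsetof (assuming MeshVertex is a standard-layout struct of exactly those five members):

#include <cstddef> // offsetof

const GLsizei stride = sizeof(MeshVertex);

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(MeshVertex, Position));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(MeshVertex, Normal));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(MeshVertex, UV));
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(MeshVertex, Tangent));
glEnableVertexAttribArray(4);
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(MeshVertex, BiTangent));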

Comments

I haven't touched OpenGL in a while, especially not 3D, so I can't be of much help, unfortunately. Just wanted to confirm your suspicion that it probably has to do with the normals, either directly or indirectly (e.g. indices in the wrong order). But that's just a guess of mine. Good luck. – Beko

Have you considered that the indices of the faces in Wavefront .obj files start at 1 rather than 0? You would have to subtract 1 from each attribute index specified in a face ("f"). – Rabbid76

@Rabbid76 I believe the parser already deals with that when it returns the index_t structure. Nonetheless, I tried subtracting 1 from every indexing operation, and that just threw an index-out-of-range error. – AAS.N

1 Answer


Alright, let us all just forget this ever happened. This is very embarrassing: everything was working fine after all. I simply "forgot" to call the following before rendering:

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);

So understandably, triangles were simply being drawn over one another in whatever order they were submitted, with no regard for depth, producing those seemingly random visibility artifacts. (Why is it not enabled by default?)
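One related pitfall for anyone landing here with the same symptoms: enabling the test is not enough on its own; the depth buffer also has to be cleared every frame, or the test compares against stale values:

// Once at startup
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);

// Every frame, before drawing: clear depth along with color
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);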

[Happy Bunny]