The usual way is to store the same normal vector for all three vertices of a face. Something like this:
struct Vertex
{
    Vector3 position;
    Vector3 normal;
};

std::vector<Vertex> vertices;
std::vector<uint32_t> indices;
for (each face f)
{
    Vector3 faceNormal = CalculateFaceNormalFromPositions(f); // compute the normal of face `f` from its positions (a sketch follows below)
    for (each vertex v)
    {
        Vertex vertex;
        vertex.position = LoadPosition(f, v); // load the position from the OBJ data via face index (f) and vertex index (v)
        vertex.normal = faceNormal;
        vertices.push_back(vertex);
        indices.push_back(static_cast<uint32_t>(vertices.size() - 1)); // flat shading duplicates vertices per face, so indices are just sequential
    }
}
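CalculateFaceNormalFromPositions is left abstract above. A face normal is just the normalized cross product of two edge vectors; here is a minimal sketch (the helper names are my own, a real codebase would use its math library instead):

#include <cmath> // for std::sqrt

Vector3 Subtract(const Vector3& a, const Vector3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

Vector3 Cross(const Vector3& a, const Vector3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vector3 Normalize(const Vector3& v)
{
    float length = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / length, v.y / length, v.z / length };
}

// Normal of triangle (p0, p1, p2); the winding order decides which side it points to.
Vector3 FaceNormal(const Vector3& p0, const Vector3& p1, const Vector3& p2)
{
    return Normalize(Cross(Subtract(p1, p0), Subtract(p2, p0)));
}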
Note: typically you will want to use vertex normals instead of face normals, because vertex normals allow better-looking lighting algorithms to be applied (per-pixel lighting):
for (each face f)
{
    for (each vertex v)
    {
        Vertex vertex;
        vertex.position = LoadPosition(f, v);
        vertex.normal = ...precalculated somewhere... // see the sketch below
        vertices.push_back(vertex);
    }
}
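One common way to precalculate them, when the asset file carries no normals, is to accumulate each face's normal into all three of its positions and normalize the sums. A sketch, assuming triangulated faces and using the Face struct from the full example below plus the FaceNormal helper sketched above:

// Smooth per-vertex normals: average the normals of all faces sharing each position.
std::vector<Vector3> ComputeVertexNormals(const std::vector<Vector3>& positions,
                                          const std::vector<Face>& faces)
{
    std::vector<Vector3> normals(positions.size(), Vector3{ 0.0f, 0.0f, 0.0f });
    for (const Face& face : faces)
    {
        Vector3 n = FaceNormal(positions[face.position_ids[0]],
                               positions[face.position_ids[1]],
                               positions[face.position_ids[2]]);
        for (uint32_t id : face.position_ids)
        {
            normals[id].x += n.x;
            normals[id].y += n.y;
            normals[id].z += n.z;
        }
    }
    for (Vector3& n : normals)
        n = Normalize(n); // the accumulated sums become averaged directions
    return normals;
}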
Note 2: typically you will want to read precalculated normals from the asset file instead of calculating them at runtime:
for (each face f)
{
    for (each vertex v)
    {
        Vertex vertex;
        vertex.position = LoadPosition(f, v);
        vertex.normal = LoadNormal(f, v);
        vertices.push_back(vertex);
    }
}
The .obj format allows storing per-vertex normals. Example (found via Google):
# cube.obj
#
g cube
# positions
v 0.0 0.0 0.0
v 0.0 0.0 1.0
v 0.0 1.0 0.0
v 0.0 1.0 1.0
v 1.0 0.0 0.0
v 1.0 0.0 1.0
v 1.0 1.0 0.0
v 1.0 1.0 1.0
# normals
vn 0.0 0.0 1.0
vn 0.0 0.0 -1.0
vn 0.0 1.0 0.0
vn 0.0 -1.0 0.0
vn 1.0 0.0 0.0
vn -1.0 0.0 0.0
# faces: position index / texcoord index (empty here) / normal index
f 1//2 7//2 5//2
f 1//2 3//2 7//2
f 1//6 4//6 3//6
f 1//6 2//6 4//6
f 3//3 8//3 7//3
f 3//3 4//3 8//3
f 5//5 7//5 8//5
f 5//5 8//5 6//5
f 1//4 5//4 6//4
f 1//4 6//4 2//4
f 2//1 6//1 8//1
f 2//1 8//1 4//1
Example code in C++ (not tested):
struct Vector3 { float x, y, z; };

struct Face
{
    uint32_t position_ids[3];
    uint32_t normal_ids[3];
};

struct Vertex
{
    Vector3 position;
    Vector3 normal;
};

std::vector<Vertex> vertices;  // your future vertex buffer
std::vector<uint32_t> indices; // your future index buffer
void ParseOBJ(const std::wstring& filename, std::vector<Vector3>& positions, std::vector<Vector3>& normals, std::vector<Face>& faces); // a minimal sketch follows below
void LoadOBJ(const std::wstring& filename, std::vector<Vertex>& vertices, std::vector<uint32_t>& indices)
{
    // After parsing the OBJ file you will have positions, normals,
    // and faces (which contain indices into positions and normals).
    std::vector<Vector3> positions;
    std::vector<Vector3> normals;
    std::vector<Face> faces;
    ParseOBJ(filename, positions, normals, faces);
    for (const Face& face : faces) // for each face
    {
        for (uint32_t i = 0; i < 3; ++i) // for each face vertex
        {
            uint32_t position_id = face.position_ids[i]; // just for shorter writing below
            uint32_t normal_id = face.normal_ids[i];
            Vertex vertex;
            vertex.position = positions[position_id];
            vertex.normal = normals[normal_id];
            vertices.push_back(vertex);
            // Each face corner gets its own vertex, so the new vertex's index
            // is simply the running count (not the OBJ position index).
            indices.push_back(static_cast<uint32_t>(vertices.size() - 1));
        }
    }
}
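ParseOBJ was left as a TODO above. Here is a minimal sketch of it (my addition, also not tested) that handles only the v, vn and f v//vn lines appearing in the cube example; no texture coordinates, no polygons beyond triangles, no error handling:

#include <fstream>
#include <sstream>
#include <string>

void ParseOBJ(const std::wstring& filename,
              std::vector<Vector3>& positions,
              std::vector<Vector3>& normals,
              std::vector<Face>& faces)
{
    // Naive wstring -> string narrowing; fine for ASCII paths only.
    std::ifstream file(std::string(filename.begin(), filename.end()));
    std::string line;
    while (std::getline(file, line))
    {
        std::istringstream stream(line);
        std::string tag;
        stream >> tag;
        if (tag == "v" || tag == "vn")
        {
            Vector3 v;
            stream >> v.x >> v.y >> v.z;
            (tag == "v" ? positions : normals).push_back(v);
        }
        else if (tag == "f")
        {
            Face face;
            for (uint32_t i = 0; i < 3; ++i)
            {
                std::string corner; // e.g. "1//2": position 1, no texcoord, normal 2
                stream >> corner;
                size_t firstSlash = corner.find('/');
                size_t lastSlash = corner.rfind('/');
                // OBJ indices are 1-based; convert them to 0-based here.
                face.position_ids[i] = std::stoul(corner.substr(0, firstSlash)) - 1;
                face.normal_ids[i]   = std::stoul(corner.substr(lastSlash + 1)) - 1;
            }
            faces.push_back(face);
        }
        // everything else (#, g, vt, ...) is ignored in this sketch
    }
}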
Note that after merging the normal data into the vertices you no longer need normal indices. The normals thus become non-indexed, and two equal normals can end up stored in different vertices, which is a waste of space. You can still use indexed rendering, but since each face corner produced its own vertex here, the index buffer is simply 0, 1, 2, ...; to actually share vertices, de-duplicate equal position/normal combinations while building the buffers, as sketched below.
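A minimal sketch of such de-duplication (my addition; the cache key and the BuildIndexedBuffers name are made up for illustration):

#include <map>
#include <utility>

// Reuse a vertex whenever the same (position index, normal index) pair appears again.
void BuildIndexedBuffers(const std::vector<Vector3>& positions,
                         const std::vector<Vector3>& normals,
                         const std::vector<Face>& faces,
                         std::vector<Vertex>& vertices,
                         std::vector<uint32_t>& indices)
{
    std::map<std::pair<uint32_t, uint32_t>, uint32_t> cache; // (position_id, normal_id) -> vertex index
    for (const Face& face : faces)
    {
        for (uint32_t i = 0; i < 3; ++i)
        {
            std::pair<uint32_t, uint32_t> key(face.position_ids[i], face.normal_ids[i]);
            auto it = cache.find(key);
            if (it == cache.end()) // first time we see this combination: emit a new vertex
            {
                Vertex vertex;
                vertex.position = positions[key.first];
                vertex.normal = normals[key.second];
                vertices.push_back(vertex);
                it = cache.emplace(key, static_cast<uint32_t>(vertices.size() - 1)).first;
            }
            indices.push_back(it->second);
        }
    }
}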
I must say that, of course, the programmable pipeline of modern GPUs allows more tricky things to be done:
- create separate buffers for positions, normals, pos_indices and nor_indices
- in the shader, use the current vertex_id to read the current position_id and its position, and the normal_id and its normal
- you can even generate normals in the shader (thus no need for normal_id or normal buffers at all)
- you can assemble your faces in a geometry shader
- ...other bad ideas here =)
With such algorithms the rendering system becomes more complex with almost no gain.