2
votes

Just learning about uniform buffers and how they work. It seems they share similarities with vertex buffers with regard to their attribute and buffer bindings.
As I don't want to hard-code the buffer binding points directly into glBindBufferBase or glUniformBlockBinding, and with future automatic shader composition in mind, I'm trying to combine vertex shader buffers and uniform buffers into a single buffer-class concept in C++.

Here is my mental model of it right now:

[Image: the asker's diagram of attribute/uniform locations mapping through binding points to buffer objects]

Is this a correct view of both attribute locations / vertex buffers / binding indices and uniform locations / uniform buffers / binding points?

1
"I try to combine vertex shader buffers and uniform buffers into a single buffer-class" You really shouldn't do that. A buffer object is not, and should not be, directly associated with the kind of data that it contains. – Nicol Bolas
Also, why does uniform C, whose location is 1, point to binding point 2? – Nicol Bolas
@NicolBolas Well, I wasn't sure about combining them, they just look kind of the same to me (as a beginner); the only difference is the OpenGL functions being called. I was referring to a representation in C++ as a single concept of a 'buffer', I do not intend to combine vertex buffers and uniform buffers into one in OpenGL. Your 2nd comment: uniform C has the layout C, and the associated buffer object with layout C has the binding index 2. – nonsensation
OK, I just remembered that uniform buffers don't have locations at all. They have a block index and a binding index, but no "location". So what does your "location" column actually mean? – Nicol Bolas
@NicolBolas Yes, the block index in the shader (similar to the location index for attributes). They are all integers and I can either set them manually in the shader or via code. Besides their names ('location'/'block index') there seems to be no difference? That's what I was asking: vertex buffers and uniform buffers build upon the same concept (as in the image in the post) and only differ somewhat in their OpenGL calls. – nonsensation

1 Answer

2
votes

The basic facts of your analogy are broadly accurate (assuming we're using vertex attrib binding). A vertex attribute in GLSL has a location, which ultimately maps to a buffer binding index, to which a buffer object is bound. A uniform block in GLSL has a block index, which ultimately is associated with a buffer binding index, to which a buffer object is bound.
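Both chains can be sketched with a few GL calls. This is only an illustrative fragment (it assumes a GL 4.3+ context with a VAO bound, plus hypothetical `program`, `positionBuffer`, and `perObjectUbo` objects), not a complete program:

```cpp
// Vertex attribute chain: attribute location -> vertex buffer binding index -> buffer.
glVertexAttribBinding(/*attribIndex=*/0, /*bindingIndex=*/0);  // location 0 reads from binding 0
glBindVertexBuffer(/*bindingIndex=*/0, positionBuffer,
                   /*offset=*/0, /*stride=*/3 * sizeof(float));

// Uniform block chain: block index -> uniform buffer binding point -> buffer.
GLuint blockIndex = glGetUniformBlockIndex(program, "PerObject");
glUniformBlockBinding(program, blockIndex, /*bindingPoint=*/2);  // block -> binding point 2
glBindBufferBase(GL_UNIFORM_BUFFER, /*bindingPoint=*/2, perObjectUbo);
```

In both cases the shader-side index is decoupled from the buffer by an intermediate binding index, which is what your diagram captures.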

But there are too many differences in the details of these things to try to form any kind of structural basis for treating these constructs the same way in a formalized object.

Consider the distinction between a vertex attribute index and a block index. Yes, these are similar in some respects. However, the way you interact with them is very different. Block indices are assigned by the GLSL compiler. Attribute indices are assigned by the user in code, either via layout(location) in the shader or via glBindAttribLocation before compiling the shader.

To put it simply: you query block indices; you assign vertex attribute locations.

Now if you want to get technical, the GLSL compiler will assign every attribute a unique location upon compiling the shader, if you did not assign those locations in some other way. And thus, you can query locations.
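The contrast looks like this in code (a sketch assuming a hypothetical `program` with attributes `normal` and a block `Lighting`):

```cpp
// Assign: attribute locations can be set by the user, before linking.
glBindAttribLocation(program, /*location=*/1, "normal");  // must precede glLinkProgram
glLinkProgram(program);

// Query: we can also ask what was assigned (by us or by the compiler)...
GLint normalLoc = glGetAttribLocation(program, "normal");
// ...but block indices can ONLY be queried; the compiler assigns them.
GLuint lightingIdx = glGetUniformBlockIndex(program, "Lighting");
```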

But that brings us to another difference in how you interact with them. Block indices don't mean anything; they are purely a numerical identifier for a particular uniform block within a shader. A block index is only useful with respect to a specific shader.

This is not the case for an attribute location. The association between attribute locations and the buffers they pull from is contained within an object separate from the actual shader: a vertex array object. This means that an attribute location, as specified by a shader, must also match the attribute format as specified in a VAO for everything to work.

So the scope of an attribute location is not bound to just one shader. It must agree with any VAO the shader is intended to be used with. Or, more importantly, a particular VAO's attribute locations must match those of any shader that this VAO will be used with.

Changing vertex formats is a fairly expensive operation in many GPUs. So keeping the same vertex format (which includes the location-to-binding association) would be a good thing. This means that it's reasonable to have multiple shaders that all use the same VAO. You would change which buffers are bound to render different objects (which is a faster state change than the format), but you wouldn't invoke glBindVertexArray between such objects.

To make that work, all of the shaders for those objects must use the same locations for their attributes. But they wouldn't necessarily have the same block indices, even if they are using the same uniform block definitions.

Uniform block index values are arbitrary; locations are not. This is why you can assign locations but not block indices.

This is also why you can assign UBO binding indices in the shader (via layout(binding=#)), but you cannot assign attribute binding indices from the shader. The shader doesn't control the binding index; the VAO does.

In fact, the ability to assign a binding index to a UBO in a shader makes it possible to essentially eliminate the block index from consideration. You can develop a known set of binding indices which have well-defined meaning. Index 0 may be for per-scene data (camera, perspective matrices, etc), index 1 could be for per-object data, index 2 could be for lighting, index 3 for arrays of bone matrices, etc.
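One possible convention along those lines, written directly in the shader (the specific indices and block contents here are illustrative, not mandated by OpenGL):

```glsl
layout(std140, binding = 0) uniform PerScene  { mat4 view; mat4 projection; };
layout(std140, binding = 1) uniform PerObject { mat4 model; vec4 color; };
layout(std140, binding = 2) uniform Lighting  { vec4 lightDir; vec4 lightColor; };
```

With `binding` fixed in the GLSL, the C++ side never needs to query block indices or call glUniformBlockBinding at all; it just binds buffers to the agreed-upon points.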

But a similar heuristic for attributes uses locations, not vertex buffer binding indices. Location 0 could be for positions, location 1 for normals, location 2 for colors, etc.

So from this perspective, attribute locations are more similar to UBO binding indices, at least as far as the GLSL code we use to talk to them is concerned. This despite the fact that you don't bind buffers to attribute locations.

Also look at the difference in how you interact with buffers at bind time. UBO buffer bindings are explicitly ranged. You may use glBindBufferBase to bind the entirety of a buffer, but this is not the expected usage. Having a bunch of small buffer objects isn't a good idea in general. And if you're using UBOs to store per-object data, you probably only want to map a single buffer, transfer all of the object data at once, and then use it, rather than having to bind different buffers over and over again.

So normal usage of the UBO binding API will be to bind appropriate sub-ranges of a small set of buffers to a particular binding point.
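One wrinkle with sub-range binding: glBindBufferRange requires the offset to be a multiple of the implementation-defined GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT (queried with glGetIntegerv; commonly 256). A small helper like the following (a sketch; the alignment value and block size are illustrative) handles packing per-object blocks into one buffer:

```cpp
#include <cstddef>

// Rounds an offset up to the next multiple of the UBO offset alignment,
// as required for the offset argument of glBindBufferRange.
constexpr std::size_t alignUp(std::size_t offset, std::size_t alignment) {
    return (offset + alignment - 1) / alignment * alignment;
}

// Example: packing 80-byte per-object blocks with a 256-byte alignment,
// object 0 sits at offset 0, object 1 at alignUp(80, 256) == 256, and
// object 2 at alignUp(336, 256) == 512. Each aligned offset would then be
// passed to glBindBufferRange(GL_UNIFORM_BUFFER, bindingPoint, ubo, offset, 80).
```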

By contrast, vertex array buffer binding is unbounded. You provide a starting offset, but there is no upper bound on the range. A rendering call could fetch from any byte within the buffer's storage (after the offset).

This is important, as features like instancing and base vertex rendering can allow you to store multiple objects in the same buffer without having to bind new buffers just to render a different object. Vertex buffer binding, while not terribly expensive, is not the cheapest thing in the world, and if it's reasonable to avoid it, you should.

Here's another difference. The format of data stored in a UBO is ultimately defined by the shader itself. Your C++ code must provide data that matches exactly the layout as defined by the UBO block definitions in the GLSL shader code.
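Concretely, for a hypothetical std140 block, the C++ struct you upload must reproduce the layout the shader dictates, padding included (a sketch; the block and its member offsets are illustrative of the std140 rules, not taken from your code):

```cpp
#include <cstddef>

// Mirrors this (hypothetical) GLSL block:
//   layout(std140) uniform PerObject {
//       mat4  model;      // offset  0: four column vec4s, 64 bytes
//       vec4  color;      // offset 64: vec4 is 16-byte aligned
//       float shininess;  // offset 80
//   };
struct PerObject {
    float model[16];  // column-major mat4
    float color[4];
    float shininess;
    float _pad[3];    // std140 pads the block size to a multiple of 16
};
static_assert(offsetof(PerObject, color) == 64, "std140 offset mismatch");
static_assert(offsetof(PerObject, shininess) == 80, "std140 offset mismatch");
static_assert(sizeof(PerObject) == 96, "std140 size mismatch");
```

The static_asserts are a cheap guard against the C++ struct silently drifting out of sync with the GLSL definition.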

By contrast, the format of the data stored in a vertex buffer is largely defined by the VAO, not the shader. An attribute of type vec4 could be getting its data from groups of 4 floats, from 4 normalized unsigned bytes that get converted to floats, from 4 non-normalized signed shorts that get converted numerically to floats, or many other alternatives.
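For example, all three of these feed the same `vec4` attribute at location 2; the shader cannot tell them apart (a fragment, assuming a GL 4.3+ context with a VAO bound and a suitable `relativeOffset`):

```cpp
glVertexAttribFormat(2, 4, GL_FLOAT, GL_FALSE, relativeOffset);         // 4 floats, used as-is
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, relativeOffset);  // 4 bytes, normalized to [0, 1]
glVertexAttribFormat(2, 4, GL_SHORT, GL_FALSE, relativeOffset);         // 4 shorts, converted to floats
```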

So again, while similar concepts are being employed, the interaction with them is very different. Trying to build a system where these two mechanisms are made similar by the structure of your code would therefore be inappropriate.