They are the same function being used in two different ways. I'd explain why it works this way, but you won't care, and it's for very stupid and irrelevant reasons.
What matters is what they're doing. And what they're doing depends on something that you didn't show: what is bound to `GL_ARRAY_BUFFER`.
See, the behavior of `glVertexAttribPointer` changes depending on that. If there is no buffer object bound to `GL_ARRAY_BUFFER` when you call `glVertexAttribPointer`, then the function will assume that the final parameter is a pointer (as the function's name says: `glVertexAttribPointer`). Specifically, it is a pointer into client-owned memory.
When it comes time to render, the vertex attribute data will come from the previously provided pointer. Thus, the second example is just using an array of client data, declared in standard C style, as the source data. No buffer objects are involved.
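A minimal sketch of that client-memory path, assuming a compatibility-profile context that is already created and current (the `positions` array and attribute index 0 here are just for illustration, not taken from your code):

```c
/* Client-memory vertex array: no buffer object involved.
   Assumes a current compatibility-profile OpenGL context. */
static const GLfloat positions[] = {
     0.0f,  0.5f, 0.0f,
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
};

glBindBuffer(GL_ARRAY_BUFFER, 0);   /* make sure no buffer is bound */
glEnableVertexAttribArray(0);
/* The last argument really is a pointer into our memory. */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, positions);

/* At draw time, GL reads the attribute data through that pointer. */
glDrawArrays(GL_TRIANGLES, 0, 3);
```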
Note: the core profile of OpenGL 3.1+ removed the ability to use client memory; there, you must use buffer objects, as explained below.
If a buffer object is bound to `GL_ARRAY_BUFFER` when `glVertexAttribPointer` is called, then something special happens. OpenGL will pretend that the pointer (which is what the final parameter is, as far as C/C++ is concerned) is actually a byte offset into the buffer bound to `GL_ARRAY_BUFFER`. It will convert the pointer into an integer, and then store that integer offset along with the buffer object currently bound to `GL_ARRAY_BUFFER`.
So the above code takes `3*sizeof(GLfloat)`, the byte offset, and converts it into a pointer. OpenGL will take that pointer and convert it back into an offset, yielding `3*sizeof(GLfloat)` again.
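In code, that round trip is just a cast on your side. A sketch of what it typically looks like (the interleaved layout, the `vbo` name, and the attribute indices are assumptions for illustration, not something from your code):

```c
/* Interleaved layout assumed: 3 position floats followed by 3 color floats
   per vertex, already uploaded into `vbo`. */
GLsizei stride = 6 * sizeof(GLfloat);

glBindBuffer(GL_ARRAY_BUFFER, vbo);       /* a buffer IS bound this time */

/* Attribute 0 starts at byte offset 0 within the buffer. */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);

/* Attribute 1 starts 3*sizeof(GLfloat) bytes in: the "pointer" is really
   that byte offset, smuggled through a void*. */
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride,
                      (void*)(3 * sizeof(GLfloat)));
```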
When it comes time to render, OpenGL will then read from the previously given buffer object, using the previously given offset.
The first example puts the vertex data into a buffer object in GPU memory. The second example puts the vertex data in a regular C/C++ array, in CPU memory.
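For contrast, here is roughly what the buffer-object path looks like end to end (again a sketch, reusing the hypothetical `positions` array from above). `glBufferData` copies the data into storage owned by the OpenGL implementation, so the C array only needs to stay valid until that call returns:

```c
/* Upload the vertex data into a buffer object. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);

/* With the buffer bound, the last argument is a byte offset, not a pointer. */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

/* Rendering now pulls the data from the buffer object, not from `positions`. */
glDrawArrays(GL_TRIANGLES, 0, 3);
```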