1
votes

I'm trying to write a simple maze game without using any deprecated OpenGL API (i.e. no immediate mode). I'm using one Vertex Buffer Object for each tile in my maze, each of which is essentially a quad built from four Vertex objects:

class Vertex {
public:
    GLfloat x, y, z; // coords
    GLfloat tx, ty;  // texture coords

    Vertex();
};

which are stored in VBOs like this:

void initVBO()
{
    Vertex vertices[4];
    vertices[0].x = -0.5;
    vertices[0].y = -0.5;
    vertices[0].z = 0.0;
    vertices[0].tx = 0.0;
    vertices[0].ty = 1.0;
    vertices[1].x = -0.5;
    vertices[1].y = 0.5;
    vertices[1].z = 0.0;
    vertices[1].tx = 0.0;
    vertices[1].ty = 0.0;
    vertices[2].x = 0.5;
    vertices[2].y = 0.5;
    vertices[2].z = 0.0;
    vertices[2].tx = 1.0;
    vertices[2].ty = 0.0;
    vertices[3].x = 0.5;
    vertices[3].y = -0.5;
    vertices[3].z = 0.0;
    vertices[3].tx = 1.0;
    vertices[3].ty = 1.0;

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex)*4, &vertices[0].x, GL_STATIC_DRAW);

    GLushort indices[4] = { 0, 1, 2, 3 }; // GLushort instead of the non-standard ushort

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort) * 4, indices, GL_STATIC_DRAW);
}

Now, I'm stuck on the camera movement. In a previous version of my project, I used glRotatef and glTranslatef to translate and rotate the scene, and then I rendered every tile using glBegin()/glEnd() mode. But these two functions are now deprecated, and I haven't found any tutorial about creating a camera in a context using only VBOs. What is the correct way to proceed? Should I loop over every tile, modifying the position of its vertices according to the new camera position?

3
Are you using the fixed-function pipeline? Also, there is no concept of a camera in OpenGL. You multiply each vertex with a model-view-projection matrix (where the view matrix is the one you are interested in). You do that in your vertex shader (you can also use the matrix stack if you are allowed to use the fixed-function pipeline). This question may also be better suited for gamedev.stackexchange.com. – Samaursa
@Samaursa As I said, I'm not using the fixed-function pipeline, because from what I've read it is deprecated. In my render method, I basically bind my VBOs and use glDrawElements() to render them. Besides, glPopMatrix() and glPushMatrix() are also deprecated, so I don't really know where to start. – Pietro Lorefice
See my answer (updated with several edits; apologies for that, I forgot to add one of the sections). – Samaursa
"I'm using one Vertex Buffer Object for each tile I have in my maze" Stop doing that. You shouldn't make tiny buffer objects like that. Put all the tiles in one buffer. – Nicol Bolas

3 Answers

4
votes

But these two functions are now deprecated, and I didn't find any tutorial about creating a camera in a context using only VBOs.

VBOs have nothing to do with this.

Immediate mode and the matrix stack are two entirely different things. VBOs deal with getting geometry data to the renderer; the matrix stack deals with getting the transformations there. Only geometry data is affected by VBOs.

As for your question: you calculate the matrices yourself and pass them to the shader as uniforms. It's also important to understand that OpenGL's matrix functions were never GPU accelerated (except on one single machine, SGI's Onyx), so they never offered any performance gain. Actually, using OpenGL's matrix stack had a negative impact on overall performance, due to carrying out redundant operations that have to be done elsewhere in the program anyway.

For a simple matrix math library, look at my linmath.h: http://github.com/datenwolf/linmath.h

2
votes

I will add to datenwolf's answer. I am assuming that only the shader pipeline is available to you.

Requirements

In OpenGL 4.0+, OpenGL does not do any rendering for you whatsoever, as it moves away from the fixed-function pipeline. If you are rendering your geometry without a shader right now, you are using the deprecated pipeline. Getting up and running without some base framework will be difficult (not impossible, but I would recommend using one). As a start, I would recommend GLUT (this will create a window for you and has basic callbacks for the idle function and input), GLEW (to set up the rendering context) and GLTools (matrix stack, generic shaders and a shader manager for a quick setup so that you can at least start rendering).

Setup

I will give the important pieces here, which you can then put together. At this point I am assuming you have GLUT set up properly (search for how to set it up), you are able to register the update loop with it, and you can create a window (that is, the loop which calls one of your selected functions [note: this cannot be a member function] every frame). Refer to the link above for help on this.

  • First, initialize GLEW by calling glewInit().
  • Set up your scene. This includes using the GLBatch class (from GLTools) to create a set of vertices to render as triangles, and initializing the GLShaderManager class (also from GLTools) and its stock shaders by calling its InitializeStockShaders() function.
  • In your idle loop, call the shader manager's UseStockShader() function to start a new batch, then call the Draw() function on your vertex batch. For a complete overview of GLTools, go here.
  • Don't forget to clear the window before rendering and to swap the buffers after rendering, by calling glClear() and glutSwapBuffers() respectively.

Note that most of the functions mentioned above take arguments. You should be able to figure those out by looking at each library's documentation.

MVP Matrix (EDIT: Forgot to add this section)

OpenGL renders everything that falls within the -1..1 coordinate range, looking down the z-axis. It has no notion of a camera and does not care about anything that falls outside these coordinates. The model-view-projection matrix is what transforms your scene to fit these coordinates.

As a starting point, don't worry about this until you have something rendered on the screen (make sure all the coordinates you give your vertex batch are less than 1). Once you do, set up your projection matrix (the default projection is orthographic) using the GLFrustum class in GLTools. You will get your projection matrix from this class, which you then multiply with your model-view matrix. The model-view matrix is a combination of the model's transformation matrix and your camera's transformation (remember, there is no camera, so essentially you are moving the scene instead). Once you have multiplied them all into one matrix, you pass it on to the shader using the UseStockShader() function.

Use a stock shader from GLTools (e.g. GLT_SHADER_FLAT) first, then start creating your own.

Reference

Lastly, I would highly recommend getting this book: OpenGL SuperBible, Comprehensive Tutorial and Reference (Fifth edition - make sure it is this edition)

1
votes

If you really want to stick with the newest OpenGL API, where many features were removed in favor of a programmable pipeline (OpenGL 4 and OpenGL ES 2), you will have to write the vertex and fragment shaders yourself and implement the transformation math there. You will have to manually create all the attributes you use in the shader, specifically the coords and texture coords from your example. You will also need two uniform variables, one for the model-view matrix and one for the projection matrix, if you want to mimic the behavior of the old fixed-functionality OpenGL.

The rotations/translations you are used to are matrix operations. During the vertex transformation stage of the pipeline, now performed by the vertex shader you supply, you must multiply a 4x4 transformation matrix by the vertex position (4 coordinates, interpreted as a 4x1 matrix, where the 4th coordinate is usually 1 if you are not doing anything too fancy). The resulting vector will be at the correct relative position according to that transformation. You then multiply the projection matrix by that vector and output the result to the fragment shader.

You can learn how all those matrices are built by looking at the documentation of glRotate, glTranslate and gluPerspective. Remember that matrix multiplication is non-commutative, so the order in which you multiply them matters (this is exactly why the order in which you call glRotate and glTranslate also matters).

As for learning GLSL and how to use shaders, I learned from here, but those tutorials relate to OpenGL 1.4 and 2 and are now very old. The main difference is that the predefined input variables to the vertex shader, such as gl_Vertex and gl_ModelViewMatrix, no longer exist, and you have to declare them yourself.