1 vote

I'm trying to display a texture (yes, it is a pot) with OpenGL 2.1 and GLSL 120, but I'm not sure how to do it; all I can get is a black quad. I've been following these tutorials: A Textured Cube, OpenGL - Textures. What I have understood is that I need to:

  1. Specify the texture coordinates to attach to each vertex (in my case 6 vertices: a quad drawn as two triangles, without indexing)
  2. Load the texture and bind it to a texture unit (the default is 0)
  3. Call glDrawArrays

Inside the shaders I need to:

  4. Receive the texture coordinates in an attribute in the vertex shader and pass them to the fragment shader through a varying variable
  5. In the fragment shader, use a sampler to fetch a texel at the position given by the varying variable

Is this all correct?

Here is how I create the vertex/texture-coordinate VBOs and load the texture:

void Application::onStart(){
    unsigned int format;
    SDL_Surface* img;

    float quadCoords[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.5f,  0.5f, 0.0f,
         0.5f,  0.5f, 0.0f,
        -0.5f,  0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f};

    const float texCoords[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        1.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 1.0f,
        0.0f, 0.0f};

    // shader loading omitted ...
    sprogram.bind(); // calls glUseProgram(programId)
    // set the sampler uniform to 0 -> use texture unit 0
    sprogram.loadValue(sprogram.getUniformLocation(SAMPLER), 0);

    // quad
    glGenBuffers(1, &quadBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 18, quadCoords, GL_STATIC_DRAW);
    // texture coordinates
    glGenBuffers(1, &textureBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 12, texCoords, GL_STATIC_DRAW);

    // load texture
    img = IMG_Load("resources/images/crate.jpg");

    if(img == nullptr)
        throw std::runtime_error(SDL_GetError());

    glGenTextures(1, &this->texture);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, this->texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->w, img->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, img->pixels);

    SDL_FreeSurface(img);
}
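One thing to watch in the upload above: a JPG decodes to 24-bit RGB, while the glTexImage2D call declares GL_RGBA, so each pixel would be read with the wrong stride. A sketch of deriving the format from the decoded surface instead (assuming SDL2, where SDL_Surface exposes format->BytesPerPixel; the unused format variable in onStart suggests this may have been the plan):

```cpp
// Derive the upload format from the decoded surface instead of
// hard-coding GL_RGBA (a JPG has no alpha channel and decodes to RGB).
GLenum format = (img->format->BytesPerPixel == 4) ? GL_RGBA : GL_RGB;
glTexImage2D(GL_TEXTURE_2D, 0, format, img->w, img->h, 0,
             format, GL_UNSIGNED_BYTE, img->pixels);
```

This is a fragment, not a full loader: it assumes the surface is tightly packed 24- or 32-bit; surfaces with other layouts would first need SDL_ConvertSurfaceFormat.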

The rendering phase:

glClear(GL_COLOR_BUFFER_BIT);

glEnableVertexAttribArray(COORDS);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glVertexAttribPointer(COORDS, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

glEnableVertexAttribArray(TEX_COORDS);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glVertexAttribPointer(TEX_COORDS, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

//draw the vertices
glDrawArrays(GL_TRIANGLES, 0, 6);

vertex shader:

#version 120

attribute vec3 coord;
attribute vec2 texCoord;

varying vec2 UV;

void main(){    
    gl_Position = vec4(coord.x, coord.y, coord.z, 1.0);
    UV = texCoord;
}

fragment shader:

#version 120

uniform sampler2D tex;
varying vec2 UV;

void main(){
    gl_FragColor.rgb = texture2D(tex, UV).rgb;
    gl_FragColor.a = 1.0;
}

I know that the tutorials use in/out instead of attribute/varying, so I tried to "convert" the code. There is also this tutorial: Simple Texture - LightHouse, which explains the built-in gl_MultiTexCoord0 attribute and gl_TexCoord array, but that is almost the same thing I'm doing. I want to know if I'm doing it all right and, if not, how to draw a simple 2D texture on the screen with OpenGL 2.1 and GLSL 120.

Where's the part where you bind the texture to the sampler? – user253751

@immibis, before binding my texture I call glActiveTexture(GL_TEXTURE0), which is the default texture unit for the sampler in the shaders. – Paulo Marcio

Not sure if this is your entire code, but you don't seem to set the texture filters. The default minification filter uses mipmapping, and your texture is not mipmap-complete (hence sampling won't work). – derhass

@derhass, that was it. I added glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); and glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); and it is working now. I was trying to write such a minimalist texture-renderer test that I decided to leave those calls out. Thanks for the solution. – Paulo Marcio
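For reference, the two calls from the resolving comment spelled out (note the second one must target GL_TEXTURE_MIN_FILTER — the minification filter is the one whose mipmapping default makes a single-level texture incomplete):

```cpp
// Make a level-0-only texture "complete": the default minification
// filter is GL_NEAREST_MIPMAP_LINEAR, which expects a full mipmap
// chain, so sampling an incomplete texture returns black.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
```

These go right after glBindTexture/glTexImage2D in onStart; alternatively, generating a full mipmap chain (e.g. via glGenerateMipmap, available through GL 3.0 or the EXT_framebuffer_object extension) also makes the texture complete.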

1 Answer

0 votes

Do you have a particular reason to use OpenGL 2.1 with GLSL 1.20? If not, stick to OpenGL 3.0+, because it is easier to understand, imho. My guess is you have two big problems:

First of all, the black quad: if it fills your whole window, that is just the background color, which means nothing is being drawn at all. From my own testing, when no shader program is active OpenGL falls back to default behavior, and a vertex buffer set up like yours would render as a white quad in your window. So that might be your first problem. I don't know if OpenGL 2.1 has vertex array objects, but OpenGL 3.0 does, and you should definitely make use of them!

Second: you don't use your shader program in the rendering phase. Call this function before drawing your quad:

glUseProgram(myProgram); // The myProgram variable is your compiled shader program

If by any chance you would like me to explain how to draw your quad using OpenGL 3.0+, let me know :) It is not far from what you already wrote in your code.
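Applied to the question's render code, that would look like the sketch below (sprogram.bind() wraps glUseProgram per the question, so calling it each frame before the draw also covers the case where something else unbinds the program between frames):

```cpp
// Rendering phase, with the shader program made current before drawing.
glClear(GL_COLOR_BUFFER_BIT);

sprogram.bind(); // glUseProgram(programId) every frame, not only in onStart

glEnableVertexAttribArray(COORDS);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glVertexAttribPointer(COORDS, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

glEnableVertexAttribArray(TEX_COORDS);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glVertexAttribPointer(TEX_COORDS, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

glDrawArrays(GL_TRIANGLES, 0, 6);
```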