3
votes

I'm following a FreeType tutorial on Wikibooks on a Mac running OS X 10.9 with Xcode 5. I have it running with shader version 120, but I want to use some modern features, so I set the SDL attributes to request an OpenGL 3.2 core context and converted my shaders to version 150. The problem is that in version 150, any use of texture2D prevents the fragment shader from compiling.
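For reference, this is roughly how I request the context (a paraphrase of my setup code, so it may not match exactly):

SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);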

Here is the version 120 of the shaders:

const GLchar* vTextSource =
"#version 120\n"
"attribute vec4 coord;"
"varying vec2 texcoord;"
"void main(void) {"
"  gl_Position = vec4(coord.xy, 0, 1);"
"  texcoord = coord.zw;"
"}";

const GLchar* fTextSource =
"#version 120\n"
"varying vec2 texcoord;"
"uniform sampler2D tex;"
"uniform vec4 color;"
"void main(void) {"
"  gl_FragColor = vec4(1,1,0, texture2D(tex, texcoord).a) * color;"
"}";

And this is what I have for version 150. The vertex shader builds but the fragment shader fails unless I remove any uses of texture2D. All the CPU code is the same between them.

const GLchar* vTextSource =
"#version 150 \n"
"in vec4 coord;"
"out vec2 texcoord;"
"void main(void) {"
"  gl_Position = vec4(coord.xy, 0, 1);"
"  texcoord = coord.zw;"
"}";

const GLchar* fTextSource =
"#version 150\n"
"uniform sampler2D tex;"
"in vec2 texcoord;"
"uniform vec4 color;"
"out vec4 outColor;"
"void main(void) {"
"  outColor = vec4(1,1,1, texture2D(tex, texcoord).a) * color;"
"}";

Is there something I am missing? Is setting up the texture sampler different in core profile than in compatibility mode?

Edit: I have changed from texture2D(...) to texture(...) in the fragment shader. It compiles now but shows nothing. I'm not sure how to go about debugging this. I've included the texture initialization routine:

void SdlApplication::render_text(const char *text, float x, float y, float sx, float sy) {
    const char *p;
    FT_GlyphSlot g = face->glyph;

    /* Create a texture that will be used to hold one "glyph" */
    GLuint tex;

    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glUniform1i(ogl.uniform_tex, 0);

    /* We require 1 byte alignment when uploading texture data */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    /* Clamping to edges is important to prevent artifacts when scaling */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* Linear filtering usually looks best for text */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Set up the VBO for our vertex data */
    glEnableVertexAttribArray(ogl.attribute_coord);
    glBindBuffer(GL_ARRAY_BUFFER, vboText);
    glVertexAttribPointer(ogl.attribute_coord, 4, GL_FLOAT, GL_FALSE, 0, 0);

    /* Loop through all characters */
    for (p = text; *p; p++) {
        /* Try to load and render the character */
        if (FT_Load_Char(face, *p, FT_LOAD_RENDER))
            continue;

        /* Upload the "bitmap", which contains an 8-bit grayscale image, as an alpha texture */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, g->bitmap.width, g->bitmap.rows, 0, GL_ALPHA, GL_UNSIGNED_BYTE, g->bitmap.buffer);

        /* Calculate the vertex and texture coordinates */
        float x2 = x + g->bitmap_left * sx;
        float y2 = -y - g->bitmap_top * sy;
        float w = g->bitmap.width * sx;
        float h = g->bitmap.rows * sy;

        point box[4] = {
            {x2, -y2, 0, 0},
            {x2 + w, -y2, 1, 0},
            {x2, -y2 - h, 0, 1},
            {x2 + w, -y2 - h, 1, 1},
        };

        /* Draw the character on the screen */
        glBufferData(GL_ARRAY_BUFFER, sizeof box, box, GL_DYNAMIC_DRAW);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        /* Advance the cursor to the start of the next character */
        x += (g->advance.x >> 6) * sx;
        y += (g->advance.y >> 6) * sy;
    }

    glDisableVertexAttribArray(ogl.attribute_coord);
    glDeleteTextures(1, &tex);
}

Edit 2: I added a VAO to my vertex setup. I now get squares of solid color where the text should be, so it seems like the texture coordinates are messed up again.

I added glGetError() checks after each call and found that I get error code 1280 (GL_INVALID_ENUM) right after glewInit, but not before...
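From what I've read, this might be a known GLEW quirk with core profile contexts: glewInit() queries glGetString(GL_EXTENSIONS), which is no longer valid in the core profile, so it leaves a GL_INVALID_ENUM behind. A sketch of the workaround as I understand it (I'm not certain this is the whole story):

glewExperimental = GL_TRUE;   /* so GLEW loads core profile entry points */
glewInit();
glGetError();                 /* clear the spurious GL_INVALID_ENUM so later checks stay meaningful */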

2
How does your vertex state setup look? Are you creating a VAO (Vertex Array Object)? This is required now that you're using the core profile. I would add calls to glGetError() if you haven't tried that yet, and see if and where errors are reported. – Reto Koradi
I created the VAO and now I get solid blocks of color! After putting glGetError() calls everywhere, I found out glewInit is the cause, but only when I request the 3.2 core profile. Disregarding that, I get another 1280 error after glTexImage2D. Does it have something to do with using alpha or byte instead of a floating point? – omikun
The GL_ALPHA texture format is gone in the Core Profile. I put together an answer with that, and summarizing the rest. – Reto Koradi

2 Answers

6
votes

The update of your GLSL code to the latest standards looks fine, except for the problem with texture2D(). As was already pointed out, the texture sampling functions are now overloaded, and texture() needs to be used instead of texture2D().

The remaining problems are mostly with updating the code to use the Core Profile, which deprecates many legacy features. Looking at the posted code, this includes:

  • Using a VAO (Vertex Array Object) is mandatory for setting up vertex state. Use glGenVertexArrays() and glBindVertexArray() to create and bind a VAO, and make sure it is bound while you call vertex state setup functions like glVertexAttribPointer() and glEnableVertexAttribArray(). A minimal setup sketch follows at the end of this answer.

  • The GL_ALPHA texture format is not supported anymore. For a texture with a single 8-bit component, use GL_R8 as the internal format and GL_RED as the format:

    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, g->bitmap.width, g->bitmap.rows, 0,
                 GL_RED, GL_UNSIGNED_BYTE, g->bitmap.buffer);
    

    This will also require a minor change in the shader code, since the sampling value for the 1-component texture is now in the red component:

    outColor = vec4(1, 1, 1, texture(tex, texcoord).r) * color;
    
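For reference, a minimal VAO setup for the quad drawn in render_text() might look roughly like this (a sketch only; ogl.attribute_coord and vboText are from the question, the rest is illustrative):

    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    /* With the VAO bound, the following vertex state is recorded in it */
    glBindBuffer(GL_ARRAY_BUFFER, vboText);
    glEnableVertexAttribArray(ogl.attribute_coord);
    glVertexAttribPointer(ogl.attribute_coord, 4, GL_FLOAT, GL_FALSE, 0, 0);

    /* Before drawing, binding the VAO again restores this state */
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);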
3
votes

Instead of using texture2D in your fragment shader, you should be using texture:

void main(void) {
  outColor = vec4(1, 1, 1, texture(tex, texcoord).a) * color;
}