2
votes

I'm trying to work out how to achieve a palette swap using fragment shaders (following this post: https://gamedev.stackexchange.com/questions/43294/creating-a-retro-style-palette-swapping-effect-in-opengl). I am new to OpenGL, so I'd be glad if someone could explain my issue to me.

Here is the technique I am trying to reproduce:

http://www.opengl.org/wiki/Common_Mistakes#Paletted_textures

I have set up an OpenGL environment so that I can create a window, load textures and shaders, and render a single square mapped to the corners of the window (when I resize the window, the image gets stretched too).

I am using a vertex shader to convert coordinates from screen space to texture space, so my texture is stretched too:

attribute vec2 position;

varying vec2 texcoord;

void main()
{
        gl_Position = vec4(position, 0.0, 1.0);
        texcoord = position * vec2(0.5) + vec2(0.5);
}

The fragment shader is:

uniform float fade_factor;
uniform sampler2D textures[2];

varying vec2 texcoord;

void main()
{
    vec4 index = texture2D(textures[0], texcoord);
    vec4 texel = texture2D(textures[1], index.xy);
    gl_FragColor = texel;
}

textures[0] is the indexed texture (the one I'm trying to colorize):

indexed texture

Every pixel has a color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255): 9 colors in total, which is why it looks almost black. I want to encode my colors using the value stored in the red channel.

textures[1] is a table of colors (9x1 pixels, each pixel a unique color, zoomed to 90x10 for posting):

palette texture

So, as you can see from the fragment shader excerpt, I want to read an index value from the first texture, for example (5, 0, 0, 255), and then look up the actual color from the pixel stored at (x=5, y=0) in the second texture, the same as described in the wiki.

But instead of a painted image I get:

result

Actually, I see that I can't access pixels from the second texture if I explicitly set the X coordinate to something like vec2(1, 0), vec2(2, 0), vec2(4, 0) or vec2(8, 0). But I do get colors when I use vec2(0.1, 0) or vec2(0.7, 0). I guess that happens because texture coordinates are normalized, so my 9x1 pixels are mapped to the range (0, 0) to (1, 1). But how can I "disable" that behavior and simply address my palette texture, so that I can ask "give me the color value of the pixel stored at (x, y), please"?
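For what it's worth, the mapping between integer texel positions and normalized coordinates can be sketched outside GLSL. This is a minimal Python sketch (the 9-texel width comes from the palette above; the `texel_center` helper name and the +0.5 texel-center offset convention are my own illustration, not part of the question's code):

```python
# Texture coordinates are normalized: 0.0 is one edge, 1.0 the other.
# For a 9x1 texture, texel i is centered at (i + 0.5) / width, not at x = i.
def texel_center(i, width):
    """Normalized x coordinate of the center of texel i."""
    return (i + 0.5) / width

width = 9
# Sampling at x = 5.0 is far outside [0, 1] and gets clamped or wrapped,
# which is why explicit coordinates like vec2(5, 0) never hit texel 5.
for i in range(width):
    print(i, texel_center(i, width))
```

Coordinates like 0.1 or 0.7 "work" only because they happen to fall inside [0, 1] and land on some texel.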

1
Why not just normalise by dividing the coordinate by the width of your lookup texture? – JasonD
Possibly that's not enough, I am still getting bad output. – Nik

1 Answer

2
votes

Every pixel has color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255)

Wrong. Every pixel has the color values: (0, 0, 0, 1), (0.00392, 0, 0, 1), (0.00784, 0, 0, 1) ... (0.0313, 0, 0, 1).

Unless you're using integer or float textures (and you're not), your colors are stored as normalized floating point values. So what you think is "255" is really just "1.0" when you fetch it from the shader.
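As a quick sanity check of that claim, here is what an 8-bit unsigned-normalized channel looks like from the shader's point of view (a Python sketch; the `normalized` helper name is my own):

```python
# An 8-bit channel value b is exposed to the shader as b / 255.0.
def normalized(b):
    return b / 255.0

print(normalized(0))    # 0.0
print(normalized(1))    # ~0.00392, what the question's "1" actually reads as
print(normalized(8))    # ~0.0313
print(normalized(255))  # 1.0
```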

The correct way to handle this is to first transform the normalized values back into their non-normalized form. This is done by multiplying the value by 255. Then convert that into a texture coordinate by dividing by the palette texture's width minus 1. Also, your palette texture should not be 2D:

#version 330 //Always include a version.

uniform float fade_factor;
uniform sampler2D palettedTexture;
uniform sampler1D palette;

in vec2 texcoord;

layout(location = 0) out vec4 outColor;

void main()
{
    float paletteIndex = texture(palettedTexture, texcoord).r * 255.0;
    outColor = texture(palette, paletteIndex / float(textureSize(palette, 0) - 1));
}

The above code is written for GLSL 3.30. If you're using earlier versions, translate it accordingly.
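The arithmetic the shader performs can be checked outside GLSL. A Python sketch of the round trip (the 9-texel palette width is taken from the question; the `index_to_palette_coord` helper name is my own):

```python
def index_to_palette_coord(stored_byte, palette_width):
    """Reproduce the shader's math: stored byte -> sampled value -> coord."""
    sampled = stored_byte / 255.0               # what texture() returns
    palette_index = sampled * 255.0             # un-normalize back to 0..8
    return palette_index / (palette_width - 1)  # spread 0..8 over [0, 1]

# Every stored index 0..8 should land on a distinct coordinate in [0, 1].
coords = [index_to_palette_coord(i, 9) for i in range(9)]
print(coords)
```

Note this maps index 0 to 0.0 and index 8 to 1.0, the texture's outer edges; with nearest filtering (and clamping) each index still resolves to the intended palette entry.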

Also, you shouldn't be using an RGBA texture for your paletted texture. It's just one channel, so use either GL_LUMINANCE (in legacy OpenGL) or GL_R8.