
I have spent the past week trying to convert the classic tunnel demo effect from various WebGL examples to openFrameworks (using an OpenGL GLSL shader). After a lot of research, trial and error, and mainly after reading [this comprehensive tutorial][1] and [this one as well][2], I still do not understand why the converted shader is not working. It appears to be some sort of problem with the texture coordinates, at least I think so.

As far as my understanding of the principle goes, you get the pixel values from the texture by calculating the angle and distance from the center and outputting the sample at those coordinates. But my results are very different. Here is the result from the [ShaderToy example][3] and [here is the example][4] whose code I started experimenting with. And here is what I get in openFrameworks: ![enter image description here][5] with this as the texture passed to the shader: ![enter image description here][6]

It seems like the shader is just walking along the top row of pixels, because after a while the screen settles on a single color (as if it had reached the end of the tunnel). I also tried a texture filled with color noise, and the final color was exactly yellow, the same as the last pixel in the top row. Strange. Hopefully someone can tell me where the problem is.

Here is testApp.cpp:

void testApp::setup(){

    texture.loadImage("koalaSQ.jpg"); // 512x512px
    tunnel.load("tunnel.vert", "tunnel.frag");
    projection.set(640, 480);
    projection.setPosition(320, 240, 0);
    projection.setResolution(2,2);
    projection.mapTexCoordsFromTexture(texture.getTextureReference());  

}

void testApp::draw(){

    ofSetColor(255);

    if (ofGetKeyPressed('g')) // used just for testing texture
    {
        ofBackground(0, 0, 0);
        texture.bind();
        projection.draw();
        texture.unbind();
    }
    else
    {
        ofBackground(0, 0, 0);
        tunnel.begin();
        tunnel.setUniform1f("timeE", time/1000);
        tunnel.setUniform2f("resolution", 512, 512);
        tunnel.setUniformTexture("tex", texture.getTextureReference(), 0);
        projection.draw();
        tunnel.end();
    }

    time = ofGetElapsedTimeMillis();

}

and here are the shaders:

// vertex shader - simple pass-through with texcoords as output for the fragment shader
#version 150
uniform mat4 modelViewProjectionMatrix;
in vec4 position;
in vec2 texcoord;
out vec2 texC;

void main()
{   
    gl_Position = modelViewProjectionMatrix * position;
    texC = texcoord;
}
// ------------------------ Fragment shader ----------------------
#version 150
precision highp float;
uniform sampler2DRect tex;
uniform float timeE;
uniform vec2 resolution;
in vec2 texC;
out vec4 output;

void main(){

    vec2 position = 2.0 * texC.xy / resolution.xy - 1.0;
    position.x *= resolution.x / resolution.y;

    float a = atan(position.y, position.x);
    float r = length(position);

    vec2 uv = vec2(1.0 / r + (timeE * 5.0), a / 3.1416);
    vec3 texSample = texture(tex, uv).xyz;
    output = vec4(vec3(texSample), 1.0);
}
Sorry for the links at the bottom. It seems like Stack Overflow gives you very smart hints as you write your question, but maybe it would be a good idea to tell you that you cannot post images or more than two links if your reputation is lower than 10 BEFORE you hit "Post question". – sphere42
You say this was originally a WebGL shader? You have committed a serious faux pas if this was copied verbatim: WebGL (OpenGL ES 2.0, technically) is not required to give you highp precision in a fragment shader. You need to check for the pre-processor definition GL_FRAGMENT_PRECISION_HIGH before declaring anything highp in a fragment shader; otherwise the shader will fail to compile on some hardware/software implementations. – Andon M. Coleman
Nope, that is not what is causing the problem. I cleaned up the code, which had become very messy after many trials and errors, and that line was a leftover. Of course I gave it a go and removed it, but nothing significant changed. – sphere42
I was not suggesting this would fix your problem. I was pointing out that it is in fact a problem on its own ;) – Andon M. Coleman
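
For reference, the guard mentioned in the comments is usually written like this at the top of a GLSL ES fragment shader (a sketch of the general pattern only, not code from the question):

    // Sketch only: guard the precision declaration so the shader also
    // compiles on implementations without highp fragment support.
    #ifdef GL_FRAGMENT_PRECISION_HIGH
    precision highp float;
    #else
    precision mediump float;
    #endif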

1 Answer


There are lots of details you did not state (e.g. what values you use for texcoord). However, I think I can still explain why your code ends up accessing only the bottom row (you might think of it as the top row when you use a top-to-bottom image array as the image data for glTexImage(), but technically it is the bottom row).

You are using uv as the coordinates when fetching from the texture, and you define uv.y as atan(position.y, position.x)/3.1416. Since atan() returns a value in [-pi, pi], this normalizes the coordinate roughly to the interval [-1, 1]. The classical OpenGL texture space is [0, 1], where 0 is the bottom row and 1 the top row. However, you are using sampler2DRect, so you are working with rectangle textures. In that case, the texture coordinates are not normalized; they are pixel coordinates in [0, width] in the horizontal direction and [0, height] in the vertical direction.

In effect, your texture coordinates only vary within a range of about two texels vertically. Depending on the wrap (and filter) modes you have set, you will only access the bottom row (or filter between the two bottom rows) with clamping, or additionally the top row (or filter between the two top rows) with GL_REPEAT for the negative parts.
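
As a minimal sketch, the lookup in the question's fragment shader could be adapted for the rectangle sampler along these lines (assuming the 512x512 texture and the existing resolution uniform, which happens to match the texture size; the fract() wrap stands in for GL_REPEAT, and this is not the original code):

    // Sketch, not the original shader: remap the tunnel coordinates
    // into the rectangle texture's pixel space.
    float a = atan(position.y, position.x);
    float r = length(position);

    // angle: [-pi, pi] -> [0, 1]; radial coordinate wrapped manually with fract()
    vec2 uvNorm = vec2(fract(1.0 / r + timeE * 5.0), a / 3.1416 * 0.5 + 0.5);

    // sampler2DRect expects pixel coordinates, so scale by the texture size
    vec3 texSample = texture(tex, uvNorm * resolution).xyz;

Alternatively, loading the image as a normal GL_TEXTURE_2D (for example by calling ofDisableArbTex() before loading it) and declaring the sampler as sampler2D would let the normalized coordinates and GL_REPEAT wrapping behave as in the WebGL original.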