For the past week I have been trying to convert the classic tunnel demo effect from various WebGL examples to openFrameworks (using an OpenGL GLSL shader). After a lot of research, trial and error, and mainly after reading [this comprehensive tutorial][1] and [this one as well][2], I still do not understand why the converted shader is not working. It appears to be some sort of problem with the texture coordinates, at least I think so.
As far as my understanding of the principle goes, for each pixel you calculate the angle and the distance from the center and use them as texture coordinates to look up the color. But my results are way off. Here is the result from the [ShaderToy example][3] and [here is the example][4] from the code I started experimenting with. And here is what I get in openFrameworks:

![enter image description here][5]

with this texture passed to the shader:

![enter image description here][6]

It looks like the shader is only stepping through the top row of pixels, because after a while the screen settles on a single color (as if it had reached the end of the tunnel). I also tried a texture filled with color noise, and the final color on screen was exactly yellow, the same as the last pixel in the top row. Strange. Hopefully someone can tell me where the problem is.
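For reference, the WebGL/ShaderToy-style code I started from does roughly the following (paraphrased from memory rather than the exact listing; `iResolution`, `iTime` and `iChannel0` are the standard ShaderToy inputs, and the texture there is a normalized, repeating `sampler2D`):

    // classic ShaderToy tunnel (paraphrased)
    void mainImage(out vec4 fragColor, in vec2 fragCoord)
    {
        // map the fragment to the [-1, 1] range, centered on the screen
        vec2 p = 2.0 * fragCoord.xy / iResolution.xy - 1.0;
        p.x *= iResolution.x / iResolution.y;

        float a = atan(p.y, p.x); // angle around the center
        float r = length(p);      // distance from the center

        // polar mapping: 1/r gives the tunnel depth, the angle wraps around the walls
        vec2 uv = vec2(1.0 / r + iTime * 0.5, a / 3.1416);

        fragColor = vec4(texture(iChannel0, uv).rgb, 1.0);
    }

My openFrameworks port below is meant to do the same thing.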
Here is testApp.cpp:

    void testApp::setup(){
        texture.loadImage("koalaSQ.jpg"); // 512x512px
        tunnel.load("tunnel.vert", "tunnel.frag");
        projection.set(640, 480);
        projection.setPosition(320, 240, 0);
        projection.setResolution(2, 2);
        projection.mapTexCoordsFromTexture(texture.getTextureReference());
    }

    void testApp::draw(){
        ofSetColor(255);
        if (ofGetKeyPressed('g')) // used just for testing texture
        {
            ofBackground(0, 0, 0);
            texture.bind();
            projection.draw();
            texture.unbind();
        }
        else
        {
            ofBackground(0, 0, 0);
            tunnel.begin();
            tunnel.setUniform1f("timeE", time / 1000);
            tunnel.setUniform2f("resolution", 512, 512);
            tunnel.setUniformTexture("tex", texture.getTextureReference(), 0);
            projection.draw();
            tunnel.end();
        }
        time = ofGetElapsedTimeMillis();
    }
and here are the shaders:
    // vertex shader - simple pass-through with texcoords as output for the fragment shader
    #version 150

    uniform mat4 modelViewProjectionMatrix;

    in vec4 position;
    in vec2 texcoord;

    out vec2 texC;

    void main()
    {
        gl_Position = modelViewProjectionMatrix * position;
        texC = texcoord;
    }
    // ------------------------ Fragment shader ------------------------
    #version 150
    precision highp float;

    uniform sampler2DRect tex;
    uniform float timeE;
    uniform vec2 resolution;

    in vec2 texC;
    out vec4 output;

    void main(){
        vec2 position = 2.0 * texC.xy / resolution.xy - 1.0;
        position.x *= resolution.x / resolution.y;

        float a = atan(position.y, position.x);
        float r = length(position);

        vec2 uv = vec2(1.0 / r + (timeE * 5.0), a / 3.1416);

        vec3 texSample = texture(tex, uv).xyz;
        output = vec4(vec3(texSample), 1);
    }
`highp` precision in a fragment shader: you need to check the existence of the pre-processor definition `GL_FRAGMENT_PRECISION_HIGH` before declaring anything `highp` in a fragment shader. Otherwise this shader will fail to compile on some hardware/software implementations. – Andon M. Coleman
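For reference, a guarded version of that precision declaration might look something like this (a minimal sketch based on the comment above; using `mediump` as the fallback is an assumption):

    // only request highp where the implementation actually supports it
    #ifdef GL_FRAGMENT_PRECISION_HIGH
        precision highp float;
    #else
        precision mediump float;
    #endif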