3
votes

I'm trying to implement a nearest-neighbor search for points using OpenGL and GLSL shaders. The NN calculation works correctly, and the result is drawn into a 1024x1024 texture (using a viewport of the current screen size). The result is simply a vec4 holding the position of the neighbor.

Now the important part: the texel holding the vec4 is located exactly where the point (the one whose neighbors I am searching for) is projected to. So in theory, to access the neighbor of an arbitrary point, I project its world position to screen coordinates and use those to access the texture (e.g. with texelFetch).

This works if I do the point projection in a vertex shader and use gl_FragCoord to access the texture in my fragment shader. But now I have a new situation: the points are only available in the fragment shader (accessed through a texture/buffer), so I have to calculate the screen position manually.

I tried the following to calculate gl_FragCoord on my own, but it produces only blank results:

vec4 pointPos = ...; // texture lookup
vec4 transformedPos = matProjectionOrtho * pointPos;
transformedPos.xy /= transformedPos.w;                   // clip space -> NDC
transformedPos.xy  = transformedPos.xy * 0.5f + 0.5f;    // NDC [-1,1] -> [0,1]
transformedPos.xy *= vec2(textureWidth, textureHeight);  // [0,1] -> texel space

The projection matrix matProjectionOrtho is the same for all rendering passes: a simple orthographic projection. textureWidth and textureHeight are the dimensions of the texture holding the neighbor data (usually 1024x1024).
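The chain in the snippet above (project, divide by w, map NDC to [0,1], scale to texel space) can be checked with CPU-side arithmetic. The following Python sketch is only illustrative: it assumes a glOrtho-style matrix over x, y in [0, 10] and a 1024x1024 texture; all names here are made up, not taken from the shader.

```python
# CPU-side check of the shader's coordinate math (illustrative, not the
# actual shader). Assumes glOrtho(0, 10, 0, 10, -1, 1) and a 1024x1024 target.

def ortho(l, r, b, t, n, f):
    # Row-major orthographic projection matrix, same mapping as glOrtho.
    return [
        [2.0 / (r - l), 0.0, 0.0, -(r + l) / (r - l)],
        [0.0, 2.0 / (t - b), 0.0, -(t + b) / (t - b)],
        [0.0, 0.0, -2.0 / (f - n), -(f + n) / (f - n)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def world_to_texel(point, proj, tex_w, tex_h):
    x, y, z, w = mat_vec(proj, [point[0], point[1], point[2], 1.0])
    # Perspective divide (w == 1 for a pure orthographic matrix).
    ndc_x, ndc_y = x / w, y / w
    # NDC [-1, 1] -> [0, 1] -> texel space [0, tex_size]
    u = (ndc_x * 0.5 + 0.5) * tex_w
    v = (ndc_y * 0.5 + 0.5) * tex_h
    return int(u), int(v)   # truncate, as ivec2() would in GLSL

proj = ortho(0.0, 10.0, 0.0, 10.0, -1.0, 1.0)
# A point in the middle of the ortho volume lands in the middle texel:
print(world_to_texel((5.0, 5.0, 0.0), proj, 1024, 1024))  # -> (512, 512)
```

If this CPU version produces sensible texel coordinates for your points but the shader still reads blanks, the mismatch is likely in the viewport or in how the coordinates are fed to texelFetch rather than in the math itself.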

Is this calculation of the screen/texture position correct?

1
There is a bit of a problem with the way your question is phrased, however... if you were going to project a world position, you would need to multiply by a view matrix in addition to your projection matrix. – Andon M. Coleman
@AndonM.Coleman The projection should be view-independent; that's why no view matrix is used. PS: I'm not a native English speaker, sorry for the bad phrasing. Still practicing :-) – bender
It is impossible for the projection to be view-independent. You need an orientation in order to decide what to project onto your image plane. World-space coordinates do not have said orientation. If you said that your coordinates were in view-space, that would make a lot more sense to me. – Andon M. Coleman
There is a good visual representation of what I mean here. Textbook GL calls view-space eye-space, but they are two words for the same thing... you might also see it called camera-space sometimes. – Andon M. Coleman

1 Answer

0
votes

Is this calculation of the screen/texture position correct?

What is your viewport? The calculation looks correct provided the viewport has the same size as your texture (which you have already stated) and, critically, contains no offset (i.e. its origin is 0,0).

The only really iffy thing here is that texelFetch (...) requires integer coordinates, but transformedPos is a floating-point vector. GLSL does not define an implicit conversion from vecN to ivecN, so you cannot use the coordinates you just calculated directly; you will have to construct an ivec2 yourself.

Something to this effect:

ivec2 texel_coords = ivec2(transformedPos.xy);

Fortunately, because texels are centered at i + 0.5 rather than i + 0.0, the truncation that happens when you convert the coordinates from floating-point to integer turns out not to matter in this case. Pixel coordinate 511.9 is obviously closer to 512 than to 511, but it still lies inside texel 511 (which spans [511, 512)), so truncating to 511 fetches the right texel. If texels were instead centered on integer boundaries, the fact that 511.9 becomes 511 when converted to an integer would really mess with things when you try to find the nearest neighbor.
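The texel-centering argument above can be checked with a few lines of arithmetic. This Python sketch is purely illustrative; the helper name is made up:

```python
# Illustrative check: with texels centered at i + 0.5, truncation (which is
# what GLSL's float -> int conversion does for positive values) picks the
# correct texel index.

def texel_of(coord):
    # Truncation toward zero, like int(ivec2(...)) construction in GLSL.
    return int(coord)

# Texel 511 covers the half-open range [511, 512), centered at 511.5.
# Every coordinate inside that range truncates to index 511:
for c in (511.0, 511.5, 511.9):
    assert texel_of(c) == 511

# Rounding to the nearest integer, by contrast, would send 511.9 to 512,
# i.e. into the neighboring texel:
assert round(511.9) == 512
```

In other words, truncation agrees with the texel grid exactly because sample positions sit at half-integer centers; no rounding correction is needed before the texelFetch.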