2
votes

I'm trying to recover the WORLD position of a point knowing its depth in EYE space, computed as follows (in a vertex shader):

float depth = -( uModelView * vec4( inPos, 1.0 ) ).z; // eye-space depth is the negated eye-space z

where inPos is a point in world space. (Obviously I don't want to recover this particular point, but rather any point whose depth is expressed in that form.)

And its normalized screen position (between 0 and 1), computed as follows (in a fragment shader):

vec2 screen_pos = ( vec2( gl_FragCoord.xy ) - vec2( 0.5 ) ) / uScreenSize.xy ;

I have access to the following info:

  • uScreenSize: as its name suggests, the screen width and height
  • uCameraPos: the camera position in WORLD space

and the standard matrices:

  • uModelView: the model-view matrix (model and view combined)
  • uModelViewProj: the model-view-projection matrix
  • uProjMatrix: the projection matrix

How can I compute the position (X, Y, Z) of a point in WORLD space (not in EYE space)?

I can't use anything else (no near, far, left, right, ...) because the projection matrix is not restricted to perspective or orthographic.

Thanks in advance.


1 Answer

4
votes

If I get your question right, you have x and y in window space (already converted to normalized device space [-1,1]), but z in eye space, and you want to reconstruct the world-space position.
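
(A side note: the screen_pos from the question is in [0,1]; a minimal sketch of getting the ndc_x/ndc_y used below, assuming those variable names:)

vec2 ndc = screen_pos * 2.0 - 1.0;   // map [0,1] to [-1,1]
float ndc_x = ndc.x;
float ndc_y = ndc.y;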

I can't use anything else (no near, far, left, right, ...) because the projection matrix is not restricted to perspective or orthographic.

Well, actually, there is not much besides orthographic or projective mappings that can be achieved by a matrix multiplication in homogeneous space. However, the projection matrix is sufficient, as long as it is invertible. (In theory, a projection matrix could transform all points onto a plane, a line, or a single point. In that case some information is lost and the original data can never be reconstructed, but that would be a very atypical case.)

So what you can get from the projection matrix and your 2D position is actually a ray in eye space. You can then intersect this ray with the z = -depth plane (your depth is the negated eye-space z) to get the point back.

So what you have to do is calculate the two points

vec4 p = inverse(uProjMatrix) * vec4( ndc_x, ndc_y, -1.0, 1.0 );   // point at the near end of the ray (NDC z = -1)
vec4 q = inverse(uProjMatrix) * vec4( ndc_x, ndc_y,  1.0, 1.0 );   // point at the far end of the ray  (NDC z = +1)

which will mark two points on the ray in eye space. Do not forget to divide p and q by their respective w components to get the 3D coordinates. Now you simply need to intersect this ray with your z = -depth plane to get the eye-space x and y. Finally, you can use the inverse of the uModelView matrix to transform that point back to object space.
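
Putting the pieces together, a minimal sketch of the remaining steps (continuing from p and q above; the names p3, q3, eyePos and objPos are just illustrative, and depth is the positive eye-space depth from your vertex shader, passed in as a varying):

vec3 p3 = p.xyz / p.w;                        // ray start in eye space (perspective divide)
vec3 q3 = q.xyz / q.w;                        // ray end in eye space (perspective divide)

// Intersect the ray with the plane z = -depth (eye space looks down -z).
float t = ( -depth - p3.z ) / ( q3.z - p3.z );
vec3 eyePos = mix( p3, q3, t );               // eye-space position of the fragment

// Back to object space via the inverse model-view matrix.
vec4 objPos = inverse(uModelView) * vec4( eyePos, 1.0 );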

However, you said that you want world space, and that is impossible with what you listed. You would need the view matrix to do it, but you have not given that. All you have is the composition of the model and view matrices, and you need to know at least one of them to reconstruct the world-space position. uCameraPos is not enough; you also need the camera's orientation.
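
For completeness: if a separate view matrix were available (call it uViewMatrix, a hypothetical uniform that is not among the ones you listed), the last step would simply become:

vec4 worldPos = inverse(uViewMatrix) * vec4( eyePos, 1.0 );   // uViewMatrix is hypothetical, not one of your uniforms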