The goal is to project a point in 3D space onto the camera's screen (the final goal is to produce a point cloud).
This is the setup:
Camera position: px, py, pz; up direction: ux, uy, uz; look_at_direction: lx, ly, lz; screen_width, screen_height, screen_dist
Point in space: p = (x,y,z)
And to initialize u, v, w:
w = normalize(position - look_at_direction) = normalize((px, py, pz) - (lx, ly, lz))
u = normalize(up × w) = normalize((ux, uy, uz) × w)
v = w × u (already unit length, since w and u are orthonormal)
This gives me the u, v, w basis vectors of the camera.
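In case it makes my setup clearer, here is a minimal sketch of that basis construction in Python/NumPy (my own naming; it assumes look_at_direction is actually a point the camera looks at, which is how the w formula above reads):

```python
import numpy as np

def camera_basis(position, up, look_at):
    """Orthonormal camera basis from position, up vector and look-at point."""
    position = np.asarray(position, dtype=float)
    up = np.asarray(up, dtype=float)
    look_at = np.asarray(look_at, dtype=float)

    # w points from the look-at point back toward the camera,
    # so the camera actually looks along -w.
    w = position - look_at
    w /= np.linalg.norm(w)

    # u is the camera's "right" vector.
    u = np.cross(up, w)
    u /= np.linalg.norm(u)

    # v is the camera's true "up"; already unit length because w and u are.
    v = np.cross(w, u)
    return u, v, w
```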
These are the steps I'm supposed to implement, and the way I understand them. I believe something was lost in translation.
(1) Find the ray from the camera to the point
I subtract the camera's position from the point:
ray_to_point = p - position
(2) Calculate the dot product between the ray to the point and the normalized ray to the center of the screen (the camera direction). Divide the result by screen_dist. This gives the ratio between the point's distance along the view direction and the distance to the camera's screen.
ratio = dot(ray_to_point, w) / screen_dist
Here I'm not sure whether I should use the camera's original look_at value or the w vector, which is the unit vector of the camera's basis (and, as built above, points away from the scene).
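For what it's worth, this is how I currently read steps 1 and 2 together (a sketch only; it uses -w as the view direction, since w as built above points away from the scene, and that is exactly the part I'm unsure about):

```python
import numpy as np

def depth_ratio(p, position, w, screen_dist):
    """Steps 1-2 (my reading): ratio of the point's depth to the screen distance."""
    p = np.asarray(p, dtype=float)
    position = np.asarray(position, dtype=float)

    # Step 1: ray from the camera to the point.
    ray_to_point = p - position

    # Step 2: depth of the point along the viewing direction.
    # The camera looks along -w, because w was built as position - look_at.
    depth = np.dot(ray_to_point, -w)

    return ray_to_point, depth / screen_dist
```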
(3) Divide the ray to the point by the ratio found in step 2, and add the result to the camera position. This will give you the projection of the point on the camera's screen.
point_on_camera_screen = position + ray_to_point / ratio
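As a sketch, step 3 then looks like this to me (adding the camera position seems necessary, otherwise the result is a vector relative to the camera rather than a point on the screen plane):

```python
import numpy as np

def project_to_screen(position, ray_to_point, ratio):
    """Step 3 (my reading): intersection of the camera-to-point ray with the screen plane."""
    # Scaling the ray by 1/ratio makes its depth equal to screen_dist;
    # adding the camera position turns it back into a point in world space.
    return np.asarray(position, dtype=float) + ray_to_point / ratio
```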
(4) Find a vector between the center of the camera's screen and the projected point. Am I supposed to find the center pixel of the screen? How can I do that?
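My best guess so far is that the screen center is the camera position pushed screen_dist along the viewing direction (-w), and that the offset's components along u and v give the 2D screen coordinates, but I'm not sure that's what is intended. A sketch of that guess (the normalization to [-1, 1] at the end is my own assumption):

```python
import numpy as np

def screen_coords(point_on_screen, position, u, v, w,
                  screen_dist, screen_width, screen_height):
    """Step 4 (my guess): 2D coordinates of the projected point on the screen."""
    position = np.asarray(position, dtype=float)

    # Center of the screen: camera position pushed screen_dist along -w.
    screen_center = position - w * screen_dist

    # Vector from the screen center to the projected point, decomposed
    # onto the camera's right (u) and up (v) axes.
    offset = point_on_screen - screen_center
    x = np.dot(offset, u)   # world units to the right of the center
    y = np.dot(offset, v)   # world units above the center

    # Normalize to [-1, 1] with half the screen size; mapping to pixels
    # would be a further scale (and possibly a y-flip), depending on conventions.
    return 2.0 * x / screen_width, 2.0 * y / screen_height
```

If that is right, the remaining piece would just be mapping those normalized coordinates to pixel indices.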
Thanks.