As explained above, I would like to render a 3D scene onto a 2D plane with raytracing. Eventually I would like to use it for volume rendering, but I'm struggling with the basics here. I have a three.js scene with the viewing plane attached to the camera (in front of it, of course).
The Setup:
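Roughly, the setup looks like the following sketch; the exact names, sizes and distances are simplified assumptions here, the real code is in the pastebin linked below.

// Simplified sketch of the setup (names and values are assumptions; see the pastebin below)
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 1000);
scene.add(camera);

// Viewing plane with 250x250 vertices, attached to the camera so it stays in front of it
var planeGeometry = new THREE.PlaneGeometry(250, 250, 249, 249);
var planeMaterial = new THREE.ShaderMaterial({
    uniforms: {
        camera: { type: "v3", value: camera.position } // camera position used by getDensity()
    },
    vertexShader: document.getElementById("vertexShader").textContent,
    fragmentShader: document.getElementById("fragmentShader").textContent
});
var viewingPlane = new THREE.Mesh(planeGeometry, planeMaterial);
viewingPlane.position.z = -100; // some distance in front of the camera
camera.add(viewingPlane);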
Then (in the shader) I shoot a ray from the camera through each of the 250x250 points in the plane. Behind the plane is a 41x41x41 volume (essentially a cube). If a ray passes through the cube, the point in the viewing plane that the ray crossed is rendered red; otherwise the point is black. Unfortunately, this only works if you look at the cube from the front. Here's the example: http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html
If you try to look at the cube from a different angle (you can move the camera with the mouse), we don't get a cube rendered onto the viewing plane as we would like, but a square with some weird pixels on the sides.
Here's the raytracing code:
Vertex Shader:
bool inside(vec3 posVec){
    bool value = false;
    if(posVec.x < 0.0 || posVec.x > 41.0){
        value = false;
    }
    else if(posVec.y < 0.0 || posVec.y > 41.0){
        value = false;
    }
    else if(posVec.z < 0.0 || posVec.z > 41.0){
        value = false;
    }
    else{
        value = true;
    }
    return value;
}
float getDensity(vec3 PointPos){
    float stepsize = 1.0;
    float emptyStep = 15.0;
    vec3 leap;
    bool hit = false;
    float density = 0.000;
    // Ray direction from the camera through the current point in the plane
    vec3 dir = PointPos - camera;
    vec3 RayDirection = normalize(dir);
    vec3 start = PointPos;
    for(int i = 0; i < STEPS; i++){
        // Shift the sample position so the cube occupies [0, 41] on each axis
        vec3 alteredPosition = start;
        alteredPosition.x += 20.5;
        alteredPosition.y += 20.5;
        alteredPosition.z += 20.5;
        bool insideTest = inside(alteredPosition);
        if(insideTest){
            // advance from the start position
            start = start + RayDirection * stepsize;
            hit = true;
        }else{
            // outside the cube: probe ahead with a larger empty-space step
            leap = start + RayDirection * emptyStep;
            bool tooFar = inside(leap);
            if(tooFar){
                start = start + RayDirection * stepsize;
            }else{
                start = leap;
            }
        }
    }
    if(hit){
        density = 1.000;
    }
    return density;
}
void main() {
    PointIntensity = getDensity(position);
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
}
Fragment Shader:
varying float PointIntensity;

void main() {
    // Rays that have traversed the volume (cube) should leave a red point on
    // the viewplane, rays that just went through empty space a black point
    gl_FragColor = vec4(PointIntensity, 0.0, 0.0, 1.0);
}
Full Code: http://pastebin.com/4YmWL0u1
Same Code but Running: http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html
I would be very glad if somebody had any tips on what I did wrong here.
EDIT:
I updated the example with the changes that Mark Lundin proposed, but unfortunately I still only get a red square when moving the camera (though no weird pixels on the side this time):
mat4 uInvMVProjMatrix = modelViewMatrix * inverseProjectionMatrix;
Here inverseProjectionMatrix is the three.js camera property projectionMatrixInverse, passed to the shader as a uniform. The unproject function is then called for every point in the viewplane with its uv coordinates.
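For reference, the uniform is hooked up roughly like this, using the material from the setup sketch above (the uniform name and the update call are assumptions; the actual code is in the pastebin):

// Sketch: exposing the inverse projection matrix to the shader
// (uniform name is an assumption; see the pastebin for the actual code)
planeMaterial.uniforms.inverseProjectionMatrix = {
    type: "m4",
    value: camera.projectionMatrixInverse
};

// The inverse has to be refreshed whenever the camera or its projection changes
camera.projectionMatrixInverse.getInverse(camera.projectionMatrix);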
The new code is here:
and running here:
http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html
To see that the camera actually moves, you can press x, y or z to print the camera's current x, y or z coordinate.
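The key handler is just a small debug helper along these lines (a sketch; the actual code in the example may differ):

// Sketch of the debug key handler (actual code in the example may differ)
document.addEventListener("keydown", function (event) {
    // Print the corresponding camera coordinate when x, y or z is pressed
    var key = String.fromCharCode(event.keyCode).toLowerCase();
    if (key === "x") console.log(camera.position.x);
    if (key === "y") console.log(camera.position.y);
    if (key === "z") console.log(camera.position.z);
});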