
I am trying to ray trace a sphere inside a cube. The cube is simply constructed out of 12 triangles with normals.

The cube has unit coordinates and unit normals. So within its local space (between -1 and 1), there should be a sphere of radius 0.5.

So I thought I should calculate the ray in the vertex shader: the ray origin is the interpolated vertex position, the ray direction is the vertex normal (or its opposite direction but that shouldn't matter I think). Interpolation should do the rest.

Then, in the fragment shader, I calculate the ray-sphere intersection points and, if there are any, change the color of the fragment.

On the front and back sides of the cube the result seems correct, but on the left, right, top, and bottom sides the sphere appears to be viewed from the wrong angle. I should see the sphere in the middle all the time, and that is not the case on those sides.

Can someone tell me what I am doing wrong?

Here is the shader code:

Vertex shader:

#version 400

layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNor;

uniform mat4 uProj;
uniform mat4 uView;
uniform mat4 uModel;

out vec3 vRayPos;
out vec3 vRayDir;

void main(void)
{
  gl_Position = uProj * uView * uModel * vec4(aPos, 1.0);
  vRayPos = aPos;                         // ray origin in the cube's local space
  vRayDir = inverse(mat3(uModel)) * aNor; // ray direction taken from the vertex normal
}

Fragment shader:

#version 400

in vec3 vRayPos;
in vec3 vRayDir;

out vec4 oFrag;

void main(void)
{
  const vec3 sphereCenter = vec3(0, 0, 0);
  const float sphereRadius = 0.5;

  vec3 rayPos = vRayPos;
  vec3 rayDir = normalize(vRayDir);
  float a = dot(rayDir, rayDir); // TODO: rayDir is a unit vector, so: a = 1.0?
  float b = 2 * dot(rayDir, (rayPos - sphereCenter));
  float c = dot(rayPos - sphereCenter, rayPos - sphereCenter) - sphereRadius * sphereRadius;
  float d = b * b - 4 * a * c;
  float t = min((-b + sqrt(max(0, d))) / 2, (-b - sqrt(max(0, d))) / 2); // smaller root of the quadratic (a == 1)

  vec3 color = (1.0 - step(0, d)) * vec3(0.554, 0.638, 0.447) + step(0, d) * abs(t) * vec3(0.800, 0.113, 0.053);

  oFrag = vec4(color, 1);
}

Notes: The factor t is not strictly necessary, but it gives an idea of how far from the cube face the ray touches the sphere, which gives it a shaded look. The step(0, d) function is used to check whether there are any intersection points, and max(0, d) prevents the shader from faulting on the square root of a negative number; both avoid branching in the code.
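The same branchless selection could also be written with mix(); a minimal equivalent sketch using the same d and t:

// mix(a, b, s) computes a * (1 - s) + b * s, so with s = step(0.0, d) this
// picks the first color when d < 0 (miss) and the second otherwise (hit).
float hit = step(0.0, d);
vec3 color = mix(vec3(0.554, 0.638, 0.447),          // miss color
                 abs(t) * vec3(0.800, 0.113, 0.053), // hit color, scaled by |t|
                 hit);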

Reference: I got the calculations from https://en.wikipedia.org/wiki/Line%E2%80%93sphere_intersection
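For reference, with ray origin $\mathbf{o}$, direction $\mathbf{u}$, sphere center $\mathbf{c}$, and radius $r$, the quadratic from that article, which the shader implements, is

$$a t^2 + b t + c = 0, \qquad a = \mathbf{u} \cdot \mathbf{u}, \quad b = 2\,\mathbf{u} \cdot (\mathbf{o} - \mathbf{c}), \quad c = (\mathbf{o} - \mathbf{c}) \cdot (\mathbf{o} - \mathbf{c}) - r^2,$$

with real intersections exactly when the discriminant $d = b^2 - 4ac \ge 0$, in which case $t = \frac{-b \pm \sqrt{d}}{2a}$.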

Edit: here is a video of the problem: Video

This question, Reflection and refraction impossible without recursive ray tracing?, might interest you. Anyway, a unit sphere has a radius of 1.0, not 0.5. You can cross-check your intersection with this ray and ellipsoid intersection accuracy improvement. Do you have screenshots of the problem? Also, your ray direction/position description sounds off to me; look at the first link for how I do it (a quad covering the whole screen; the position is the vertex position, but the direction is position - focus instead of the normal!). - Spektre
I never said I wanted a unit sphere. I know it's only a half-size sphere, so it would not touch the edges of the cube. I will look into the links you provided in more detail, but a quick look doesn't seem to help. The ray pos/dir are not the usual kind where you render a big screen-filling quad (I have done that a lot before). This is different: I have a scene inside a cube that is rendered in the usual way. Every side of the cube is the camera lens itself, viewing in the opposite direction of the normal of that side. But I agree that there is probably something wrong in that part of my code. - scippie
I added a video in my OP! - scippie
Oh, that's entirely different than I thought. Anyway, for the stuff you're doing, vRayDir = inverse(mat3(uModel)) * aNor; looks suspicious. I would use your cube center and translate it by the cube half-size + focal length in the normal direction to get the focal point, and then vRayDir = vRayPos - focal_point ... - Spektre
I never said that, but I understand where that thought may have come from. What I said was that I can't use simple billboards for the 3D particles. But if I am reading your comment correctly, I think it may still be possible: keep the billboard unrotated and change its viewport angle. Thanks. Cool video, by the way! - scippie

1 Answer


Your rays should be calculated by taking the direction between a given fragment and the camera position. (In view space, that would be the origin.) The vertex normals have absolutely nothing to do with it.

You can technically calculate the rays in the vertex shader and pass them to the fragment shader as interpolants. However, this can give incorrect results, because the interpolated value varies linearly across the triangle, and ray directions do not.

A better approach is to output the view-space position of your vertex in the vertex shader. In the fragment shader, calculate a ray from the origin to the fragment's view-space position, then perform your ray intersection tests using that ray. The rasterizer will correctly interpolate the view-space position. You could also calculate that yourself in the fragment shader, but the hardware is pretty good at this, so it makes sense to let it do the work for you.
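Here is a minimal sketch of that approach (the vViewPos name and the uSphereCenterView uniform are illustrative, not from the question's code):

Vertex shader:

#version 400

layout(location = 0) in vec3 aPos;

uniform mat4 uProj;
uniform mat4 uView;
uniform mat4 uModel;

out vec3 vViewPos; // view-space position; the rasterizer interpolates this

void main(void)
{
  vec4 viewPos = uView * uModel * vec4(aPos, 1.0);
  vViewPos = viewPos.xyz;
  gl_Position = uProj * viewPos;
}

Fragment shader:

#version 400

in vec3 vViewPos;

uniform vec3 uSphereCenterView; // sphere center already transformed into view space

out vec4 oFrag;

void main(void)
{
  const float sphereRadius = 0.5;

  // In view space the camera sits at the origin, so the ray through this
  // fragment is just the normalized view-space position.
  vec3 rayPos = vec3(0.0);
  vec3 rayDir = normalize(vViewPos);

  vec3 oc = rayPos - uSphereCenterView;
  float b = 2.0 * dot(rayDir, oc);
  float c = dot(oc, oc) - sphereRadius * sphereRadius;
  float d = b * b - 4.0 * c; // a == 1 because rayDir is normalized

  float hit = step(0.0, d);
  oFrag = vec4(mix(vec3(0.554, 0.638, 0.447), vec3(0.800, 0.113, 0.053), hit), 1.0);
}

On the CPU side, uSphereCenterView would be set to (uView * uModel * vec4(0, 0, 0, 1)).xyz so the sphere stays at the cube's local origin.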

Having said all of that, the major issue with your current implementation is using the vertex normals to calculate rays. That's wrong. All you need is the camera position and the fragment position. If you look carefully at your video, you'll see that the same thing is being drawn on all sides, regardless of position relative to the camera.

For a simple sphere, all you need is the camera-to-fragment ray. Calculate the distance from the line containing that ray to the center of the sphere; if it is less than the sphere's radius, it's a hit.
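As a sketch, that distance test could look like this in GLSL, assuming rayPos and rayDir (normalized) are set up as in the shaders above:

// Distance from the sphere center to the line through rayPos along rayDir.
vec3 oc = sphereCenter - rayPos;
float tClosest = dot(oc, rayDir);          // parameter of the closest point on the line
vec3 closest = rayPos + tClosest * rayDir; // the closest point itself
bool hit = distance(closest, sphereCenter) < sphereRadius;

Note that this tests the infinite line; to restrict the test to the forward half of the ray, also require tClosest >= 0.0.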