I have a grid of points (Object3Ds using THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons:

[image: the model sitting on the grid of points]

I'm trying to work out which of the points in the grid my perspective camera can see at any given moment, i.e. every time the camera position is updated by my orbit controls.

My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays intersect the model and remove the points corresponding to those rays from a list of all the points, leaving me with a list of points the camera can see.

So far so good. However, the ray creation and intersection code has to run in the render loop (it must be re-run whenever the camera moves), and as a result it's horrendously slow.

var vertices = gridPoints.geometry.vertices;
var gridPointsVisible = [];
var startPoint = camera.position.clone();

// reuse a single Raycaster instead of allocating one per point
var raycaster = new THREE.Raycaster();
var direction = new THREE.Vector3();

// cast a ray from the camera towards each point in the grid
for (var i = 0; i < vertices.length; i++) {
    direction.subVectors(vertices[i], startPoint).normalize();
    raycaster.set(startPoint, direction);

    // only count hits between the camera and the point itself;
    // anything the ray hits beyond the point cannot occlude it
    raycaster.far = startPoint.distanceTo(vertices[i]);

    // the point is visible if nothing blocks the ray to it
    // (Array#pop takes no argument, so the original pop(...) call
    // removed the wrong element; build the visible list instead)
    if (raycaster.intersectObject(defaultMesh).length === 0) {
        gridPointsVisible.push(vertices[i]);
    }
}

For the example model shown, around 2300 rays are created and the mesh has 1500 faces, so rendering takes forever.

So I have 2 questions:

  1. Is there a better way of finding which objects the camera can see?
  2. If not, can I speed up my raycasting/intersection checks?

Thanks in advance!

Take a look at this example of GPU picking: threejs.org/examples/webgl_interactive_cubes_gpu.html You could do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting. – TheJim01

Thanks for the comment @TheJim01, however I'm not sure I fully understand. What do you mean by an off-screen render target, and what do you mean by mapping the data back to the spheres? Thanks – Joe Morgan

WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. Regarding the mapping, you'll parse the render target buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your spheres, and you should know what color each sphere should be by the same color calculation as the shader used. If a sphere's color is in your list of found colors, then that sphere is visible. Does that make sense? – TheJim01

Thanks @TheJim01. I've spent the morning implementing a modified version of that example. The problem I'm having is that it's incredibly slow to loop through the render target buffer, as it contains every single pixel on the screen (as opposed to just one pixel under the mouse in that example). Any ideas? – Joe Morgan

1) You can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. 2) If you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on. – TheJim01

1 Answer

Take a look at this example of GPU picking: threejs.org/examples/webgl_interactive_cubes_gpu.html

You can do something similar, which is especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
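
As one possible sketch of the setup (not the only way to do it): instead of a custom shader, per-vertex colors on a PointsMaterial give the same flat unique color per point. This assumes the legacy geometry.vertices/geometry.colors API from your snippet; names like pickingScene and pickingGeometry are made up for the example:

// Build a parallel scene where each point's color encodes its index.
var pickingScene = new THREE.Scene();
var pickingGeometry = gridPoints.geometry.clone();

// Encode index i as hex color i + 1 (0 is reserved for "nothing").
// Using all of RGB allows up to 2^24 distinguishable points.
for (var i = 0; i < pickingGeometry.vertices.length; i++) {
    pickingGeometry.colors[i] = new THREE.Color(i + 1);
}

var pickingPoints = new THREE.Points(
    pickingGeometry,
    new THREE.PointsMaterial({ size: 5, vertexColors: THREE.VertexColors })
);
pickingScene.add(pickingPoints);

// Add the occluding mesh in flat black, so it hides points behind it
// but never produces a point color itself.
pickingScene.add(new THREE.Mesh(
    defaultMesh.geometry,
    new THREE.MeshBasicMaterial({ color: 0x000000 })
));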

WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
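
Continuing the sketch above, the off-screen pass could look like this (the render(scene, camera, target) call is the old-style signature used by the three.js versions that still have geometry.vertices):

// Render the picking scene to an off-screen buffer, then read it back.
var pickingTarget = new THREE.WebGLRenderTarget(
    renderer.domElement.width,
    renderer.domElement.height
);

renderer.render(pickingScene, camera, pickingTarget);

var pixelBuffer = new Uint8Array(4 * pickingTarget.width * pickingTarget.height);
renderer.readRenderTargetPixels(
    pickingTarget, 0, 0,
    pickingTarget.width, pickingTarget.height,
    pixelBuffer
);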

For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-point objects should be some other flat color). Then you can loop through your points; you know what color each point should be, by the same color calculation the shader used. If a point's color is in your list of found colors, then that point is visible.
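
In the running sketch, the decode step just mirrors the encode step (index i was written as color i + 1):

// Collect every color id present in the buffer...
var seen = {};
for (var i = 0; i < pixelBuffer.length; i += 4) {
    var id = (pixelBuffer[i] << 16) | (pixelBuffer[i + 1] << 8) | pixelBuffer[i + 2];
    if (id > 0) seen[id] = true; // 0 means background or the black mesh
}

// ...then keep the points whose id was seen.
var gridPointsVisible = [];
var vertices = gridPoints.geometry.vertices;
for (var j = 0; j < vertices.length; j++) {
    if (seen[j + 1]) gridPointsVisible.push(vertices[j]);
}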

To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
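
Both optimizations are small changes to the sketch above; the sizes here are just example numbers:

// 1) A low-resolution target means far fewer pixels to scan.
var pickingTarget = new THREE.WebGLRenderTarget(256, 256);

// 2) With fewer than 256 points, write the id into the red channel only,
//    e.g. pickingGeometry.colors[i] = new THREE.Color((i + 1) / 255, 0, 0);
//    then the readback loop only has to inspect every 4th byte:
var seen = {};
for (var i = 0; i < pixelBuffer.length; i += 4) {
    if (pixelBuffer[i] > 0) seen[pixelBuffer[i]] = true; // R byte is the id
}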