2
votes

I have a 3D scene with a lot of simple objects (possibly a huge number of them), so I don't think ray-tracing is a good idea for picking objects with the mouse.

I'd like to do something like this:

  1. render all these objects into some OpenGL off-screen buffer, using a pointer to the current object instead of its color

  2. render the same scene onto the screen, using real colors

  3. when the user picks a point with (x,y) screen coordinates, I take the value from the off-screen buffer (at the corresponding position) and get back a pointer to the object

Is it possible? If so, what type of buffer should I use for "drawing with pointers"?
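This is the classic "color picking" technique. One caveat: a raw pointer (64 bits) will not survive a round trip through an 8-bit-per-channel framebuffer, so the usual approach is to give each object a small integer ID, encode the ID in the RGB channels during the pick pass, and keep your own ID-to-object table on the CPU. A minimal sketch of the encode/decode step (function names are illustrative, not from any API):

```c
#include <stdint.h>

/* Pack a 24-bit object ID into three 8-bit color channels
   (the color you would draw the object with in the pick pass). */
static void id_to_rgb(uint32_t id, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (uint8_t)( id        & 0xFF);
    *g = (uint8_t)((id >>  8) & 0xFF);
    *b = (uint8_t)((id >> 16) & 0xFF);
}

/* Recover the object ID from a pixel read back from the pick buffer. */
static uint32_t rgb_to_id(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint32_t)r | ((uint32_t)g << 8) | ((uint32_t)b << 16);
}
```

The recovered ID then indexes into your own array of object pointers; reserve one value (for example 0) for "nothing was hit".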

1
Are they moving (a lot)? If they aren't, you might be better off using a data structure to quickly find the approximate area where you are picking. That way you can handle large numbers of objects with little performance hit. Also keep in mind that if you don't mouse-pick every frame, you can easily run the picking on another thread and avoid the performance issue almost entirely (a user might not care about a 100 ms delay between click and pick). – Full Frontal Nudity
There is a huge difference between ray-tracing and ray-casting (performance-wise). This is an application of ray-casting: you are not going to "trace" the ray as it bounces off multiple surfaces or passes through different materials. Use a spatial partitioning data structure (chances are your scene already has one) to accelerate the initial ray-cast by reducing the set of objects to test against, and you should be good to go. The only time you'd really want to do a pixel readback is if you need pixel-perfect selection; it adds a lot of latency. – Andon M. Coleman
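To make the comments concrete: the per-object test in a ray-cast pick is usually a cheap ray-vs-bounding-box check (the slab method), run only against the few boxes the spatial partition reports near the ray. A sketch under those assumptions, with all names chosen for illustration:

```c
#include <stdbool.h>
#include <float.h>

/* Slab-method ray/AABB intersection. The ray has origin o and
   precomputed inverse direction inv_d (inv_d[i] = 1.0f / dir[i]);
   the box is given by its min/max corners. Returns true if the ray
   hits the box at some parameter t >= 0. */
static bool ray_hits_aabb(const float o[3], const float inv_d[3],
                          const float bmin[3], const float bmax[3])
{
    float tmin = 0.0f, tmax = FLT_MAX;
    for (int i = 0; i < 3; ++i) {
        float t1 = (bmin[i] - o[i]) * inv_d[i];
        float t2 = (bmax[i] - o[i]) * inv_d[i];
        if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
        if (t1 > tmin) tmin = t1;
        if (t2 < tmax) tmax = t2;
        if (tmin > tmax) return false;   /* slab intervals do not overlap */
    }
    return true;
}
```

With thousands of simple objects, this test is a handful of multiplies and compares per box, so even a coarse grid or octree in front of it keeps picking well under a millisecond.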

1 Answer

0
votes

I suppose you can render in two passes: first to a buffer or texture holding the data you need for picking, and then, in the second pass, the scene actually displayed. I am not really familiar with OpenGL, but in DirectX you can do it like this: http://www.two-kings.de/tutorials/dxgraphics/dxgraphics16.html. You could then find a way to analyse the texture. Keep in mind that you are rendering the scene twice, which will not necessarily double your render time (you do not need to apply all your shaders and effects in the pick pass), but it will increase it quite a lot. Also, each frame you are sending at least 2 MB of data from GPU to CPU (at 1 byte per pixel on a 2K monitor), and that grows if you have more than 256 objects on screen, since a single byte can only distinguish 256 IDs.

Edit: Here is how to do the same with OpenGL, although I cannot verify that the tutorial is correct: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/ (there are also many more if you look around on Google).
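On the readback side there are two gotchas: `glReadPixels` uses a bottom-left origin, so the mouse y coordinate must be flipped, and the ID has to be reassembled from the color channels. A sketch of the CPU side, assuming the pick buffer has already been read back into an RGBA byte array (the GL call itself needs a live context and is shown only in the comment; the function name is illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Look up the object ID under the mouse in a pick buffer previously
   filled with something like:
     glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
   Mouse coordinates are top-left based, the buffer is bottom-left based,
   so the row index must be flipped. */
static uint32_t picked_id(const uint8_t *pixels, int width, int height,
                          int mouse_x, int mouse_y)
{
    int flipped_y = height - 1 - mouse_y;   /* window row -> GL row */
    size_t i = ((size_t)flipped_y * (size_t)width + (size_t)mouse_x) * 4;
    /* Reassemble the 24-bit ID packed into the R, G and B channels. */
    return (uint32_t)pixels[i]
         | ((uint32_t)pixels[i + 1] << 8)
         | ((uint32_t)pixels[i + 2] << 16);
}
```

In practice you would read back only the single pixel under the cursor (a 1x1 `glReadPixels`) rather than the whole buffer, which removes most of the bandwidth cost mentioned above, though the pipeline stall it causes is still the main source of latency.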