I'm writing a screensaver in C++ using OpenGL with a bouncing ball (it moves in x and y only; it never bounces in z). When this ball touches an edge of the screen, a small patch of damage appears on the ball. (When the ball is damaged enough, it explodes.) Finding the part of the ball to damage is the easy part, as long as the ball isn't rotating.
The algorithm I settled on is to keep track of the positions of the left-most, right-most, top-most and bottom-most vertices. For every collision I obviously need to know which screen edge was hit. Before the ball could roll, this was easy: if it hit the left screen edge, I know the left-most vertex is the point on the ball that took the hit. From there, I gather all vertices within distance d of that point. I don't need the actual vertex that was hit, just the point on the ball's surface.
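In case it matters, the distance-gathering step itself is simple. Here's a minimal sketch of what I mean; the `Vec3` struct and the `verticesNear` helper are just illustrative names, not my real code, and everything is in the ball's local (model) space:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Collect the indices of all vertices within distance d of the
// contact point. Both the vertices and the contact point are in
// the ball's local/model space.
std::vector<std::size_t> verticesNear(const std::vector<Vec3>& verts,
                                      const Vec3& contact, float d)
{
    std::vector<std::size_t> hit;
    const float d2 = d * d;  // compare squared distances, avoids sqrt per vertex
    for (std::size_t i = 0; i < verts.size(); ++i) {
        const float dx = verts[i].x - contact.x;
        const float dy = verts[i].y - contact.y;
        const float dz = verts[i].z - contact.z;
        if (dx * dx + dy * dy + dz * dz <= d2)
            hit.push_back(i);
    }
    return hit;
}
```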
This way I don't have to read every vertex, translate it by the ball's (x, y) position, and test which ones are off-screen. That brute-force approach would solve all my problems, but it would be slow as hell.
Currently, the ball's rotation is controlled by pitch, yaw and roll angles. The problem is: given those angles, which point on the ball's outer surface has touched the edge of the screen? I've looked into keeping up, right and direction vectors, but I'm totally new to this and, as you may have noticed, totally lost. I've read the rotation matrix article on Wikipedia several times and I'm still drawing a blank. If I dropped one rotation axis it would be much simpler, but I'd prefer not to.
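To make the question concrete, here's my (possibly wrong) understanding of what such a lookup might look like: take the world-space contact direction for the edge that was hit (e.g. (-1, 0, 0) for the left edge), rotate it by the inverse of the ball's rotation to get back into model space, then scale by the radius. This is only a sketch: the `Vec3` struct and `contactPointModelSpace` are made-up names, and the rotation order R = Ry * Rx * Rz is an assumption that would have to match my actual draw code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Sketch: build the ball's rotation matrix R from yaw (about Y),
// pitch (about X) and roll (about Z), then apply its transpose
// (= its inverse, since R is orthonormal) to the world-space contact
// direction. Scaled by the radius, that gives the contact point on
// the ball's surface in model space.
// Assumes rotation order R = Ry * Rx * Rz; adjust to match the draw code.
Vec3 contactPointModelSpace(float yaw, float pitch, float roll,
                            const Vec3& worldDir, float radius)
{
    const float cy = std::cos(yaw),   sy = std::sin(yaw);
    const float cp = std::cos(pitch), sp = std::sin(pitch);
    const float cr = std::cos(roll),  sr = std::sin(roll);

    // Row-major R = Ry * Rx * Rz
    const float R[3][3] = {
        { cy*cr + sy*sp*sr,  -cy*sr + sy*sp*cr,  sy*cp },
        { cp*sr,              cp*cr,            -sp    },
        { -sy*cr + cy*sp*sr,  sy*sr + cy*sp*cr,  cy*cp },
    };

    // modelDir = R^T * worldDir (multiply by the columns of R)
    const Vec3 m = {
        R[0][0]*worldDir.x + R[1][0]*worldDir.y + R[2][0]*worldDir.z,
        R[0][1]*worldDir.x + R[1][1]*worldDir.y + R[2][1]*worldDir.z,
        R[0][2]*worldDir.x + R[1][2]*worldDir.y + R[2][2]*worldDir.z,
    };
    return { m.x * radius, m.y * radius, m.z * radius };
}
```

With zero rotation and the left edge, this should just return the left-most point (-radius, 0, 0); the extreme-vertex trick above would then be the special case of this for unrotated balls. Am I on the right track?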