On a normal day of programming, I structure my code so that every branch is most likely not taken:
int retval = do_something();
if (!retval) { /* less-than-likely event */ }
This optimizes for branch prediction by letting the CPU's predictor bit(s) settle on "do not take". However, do the predictor bit(s) get forced back to "take" after a for loop?
// prediction = "likely take"
if (false) { }
// prediction = "probably take"
if (false) { }
// prediction = "probably not take"
if (false) { }
// prediction = "likely not take"
if (false) { }
/* ... thousands of other if (false) branches that are speedy-fast ... */
for (int i = 0; i < 5; i++) { }
// prediction = "likely take"?
I know it's an unrealistic and minuscule optimization, but hey, the more you know.
EDIT: Let's assume GCC does not optimize all of the code above away, and let's restrict the discussion to the amd64 architecture, since I did not realize how low-level this question is.