For timing an algorithm (to roughly millisecond precision), which of these two approaches is better:
clock_t start = clock();
algorithm();
clock_t end = clock();
double time = (double)(end - start) / CLOCKS_PER_SEC * 1000.0;
Or,
time_t start = time(0);
algorithm();
time_t end = time(0);
double time = difftime(end, start) * 1000.0;
Also, from some discussion in the C++ channel on Freenode, I know clock() has very poor resolution, so the timing will come out as zero for a (relatively) fast algorithm. But which has better resolution, time() or clock()? Or are they the same?
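For reference, here is a minimal sketch of a third option I have seen suggested, assuming C++11's std::chrono is available (requires #include &lt;chrono&gt;); steady_clock is typically much finer-grained than either clock() or time():

auto start = std::chrono::steady_clock::now();
algorithm();
auto end = std::chrono::steady_clock::now();
// duration<double, std::milli> converts the elapsed tick count to milliseconds
double elapsed_ms = std::chrono::duration<double, std::milli>(end - start).count();

I would still like to know how the two C-library approaches above compare, since chrono may not be available on the compilers I have to support.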