While playing recently with clock() in order to time the performance of different kinds of code and algorithms, I found an annoying bug: clock() just can't register work that has taken less than 0.01 seconds. This is pretty unexpected, as clock() should return the processor time used by the program. The man page for clock() states:
The clock() function returns an approximation of processor time used by the program.
I thought the word approximation was there because POSIX requires CLOCKS_PER_SEC to always be set to 1,000,000, while modern CPUs tick at a much higher frequency; the number of actual hardware ticks has to be scaled to clock()'s nominal rate of one million ticks per second, and that scaling would be the approximation.
But the word "approximation" is there for a different reason: clock() just can't register very short processor times. Let's examine the following program:
#include <stdio.h>
#include <time.h>

int main()
{
    clock_t start, end;
    int temp = 0, i;

    start = clock();
    for (i = 0; i < 2500000; i++)
        temp += temp;
    end = clock();

    printf("%ld\n", (long)(end - start));
    return 0;
}
If I compile and run this program it prints 10000, which means that clock() counted 10000 ticks; if we divide that by CLOCKS_PER_SEC, we get that the processor time for computing the loop was 0.01 seconds.
Now if I make the loop a bit shorter by changing the upper limit from 2,500,000 to 2,000,000 and compile and run it, the output is 0. Suddenly the loop doesn't take any processor time!
That means that although CLOCKS_PER_SEC is set to 1,000,000, the actual resolution one can depend on is less than 1,000 ticks per second, which is pretty low. This disqualifies clock() from being used in any task that demands precision.
Possible alternatives to clock() with higher resolution include clock_gettime() and gettimeofday() (despite its name). Both functions provide high-resolution clocks, reporting time in nanoseconds and microseconds respectively, which allows them to be used in much the same way as clock(), but hopefully more accurately.
The performance of clock() can be even worse, with some people reporting 50 ms. Try this example:
double freq[ITERATIONS];
for (int i = 0; i < ITERATIONS; i++)
    /* ... measure one clock() step into freq[i]; the loop body was mangled in the original comment ... */;
double avg_freq = Avg(freq, ITERATIONS);
double min_freq = Min(freq, ITERATIONS);
double max_freq = Max(freq, ITERATIONS);
printf("\n\n avg_freq = %.5f\n min_freq = %.5f\n max_freq = %.5f ", avg_freq, min_freq, max_freq);
On an Athlon XP 1.7 GHz it's:
avg_freq = 0.01011
min_freq = 0.01000
max_freq = 0.10000
100 ms :) in the worst case
Interesting, looks like things improved over the years. I have a C11 implementation, and clock() is pretty accurate on a modern processor. The tests I did showed that the accuracy is always below 1 ms.
neo