0
votes

I am sampling the system time at 500ms intervals using std::chrono. On each sample I take the difference between the current time and the previous time and accumulate it. However, after 1 hour the total accumulated time is about 50ms ahead. Does anybody know the reason? The code that gets the system time is below:

void serverTimeResponse(long serverTime) {
    // current wall-clock time (system_clock, since it is compared against the server's time)
    auto now = std::chrono::system_clock::now();
    long localTime = (long)std::chrono::duration_cast<std::chrono::microseconds>(now.time_since_epoch()).count();
    long duration = localTime - serverTime;
    writeToCsvFile(duration);
}

serverTimeResponse is called every 500ms, whenever the server replies over the socket. I record each duration into a CSV file, and after 1 hour the duration in the last row is about 50ms larger than the duration in the first row. I am wondering whether the chrono library has some issue getting the system time.

Running on Ubuntu 18.04.

Please show a minimal reproducible example. What is "now"? How do you know it's changed by 50ms? – Alan Birtles
What are you using to measure system time? Or are you just looking at the journal to catch the systemd timer that synchronizes the clock? – David C. Rankin
@AlanBirtles I get a reference time from the server every 500ms. When that time reaches my machine, I record the current local time, then subtract the two times to get a difference. The difference kept growing, increasing by about 50ms per hour. – Eric
@AlanBirtles I have updated some code in the question, please look at it, thanks. – Eric

1 Answer

0
votes

I can understand what you are trying to achieve, but you may be running into issues related to the OS's thread scheduling. You have not provided a minimal reproducible example, so I assume you are using something like:

std::this_thread::sleep_for(500ms);

If this is the case, then you are at the mercy of the OS to sleep for the correct amount of time.

Every 500ms you sample the current time and subtract the last time; this sounds fine. However, when you sleep for 500ms, the OS schedules the program to be woken up sometime around that mark. Remember, operating systems are NOT real-time: they make no guarantee that your application will sleep for EXACTLY 500ms.
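
If you want to see this for yourself, here is a minimal sketch (my own illustration, not code from the question) that measures how long each sleep_for(500ms) call really takes against steady_clock and prints the overshoot:

#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono;
    for (int i = 0; i < 10; ++i) {
        auto before = steady_clock::now();
        std::this_thread::sleep_for(milliseconds(500));   // ask for exactly 500ms
        // sleep_for blocks for *at least* the requested time, so this is >= 0
        auto overshoot = steady_clock::now() - before - milliseconds(500);
        std::cout << "sleep " << i << " overshot by "
                  << duration_cast<microseconds>(overshoot).count() << "us\n";
    }
}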

All of these small per-sleep discrepancies between your samples add up. Below are some calculations:

You are sampling every 500ms for 1hr:

1000ms * 60 * 60 = 3600000ms    // 1 hour in milliseconds
3600000ms / 500ms = 7200        // 7200 samples at 500ms each
50ms / 7200 = 0.006944444ms     // ~6.94us of error per sample

As you can see, that works out to a tiny error of about 6.9us per sample. So the OS almost slept for 500ms each time, but not quite. These small errors add up, and there isn't a whole lot you can do when it comes to getting EXACT sleeps.
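
If the accumulation itself is the concern, one common mitigation (just a sketch of the general technique, not something taken from the question's code) is to sleep until an absolute deadline with sleep_until instead of sleeping for a relative 500ms. A late wake-up then shortens the next sleep rather than pushing every later sample back:

#include <chrono>
#include <thread>

void sampleLoop() {
    using namespace std::chrono;
    auto next = steady_clock::now() + milliseconds(500);
    for (;;) {
        std::this_thread::sleep_until(next);   // wake at (or just after) the deadline
        next += milliseconds(500);             // the deadline itself never drifts
        // ... take the time sample / handle the server response here ...
    }
}

Each individual wake-up still jitters, but the error no longer accumulates across an hour of samples.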