Question 1: Does a CPU or a GPU have any other way to evaluate time than by counting cycles?
Different hardware may provide different facilities. For example, x86 PCs have employed several hardware facilities for timing: for the last decade or so, x86 CPUs have had Time Stamp Counters operating at their processing frequency or - more recently - at some fixed frequency (a "constant rate", aka "invariant", TSC); there may be a High Precision Event Timer (HPET); and going back further there were Programmable Interval Timers (https://en.wikipedia.org/wiki/Programmable_interval_timer).
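For illustration, here's a minimal sketch of sampling the TSC directly via a compiler intrinsic, assuming an x86-64 target built with GCC or Clang (MSVC exposes the same __rdtsc intrinsic via <intrin.h>):

```cpp
// Minimal sketch: sampling the Time Stamp Counter directly on x86-64.
// Assumes GCC/Clang and <x86intrin.h>; not portable beyond x86.
#include <x86intrin.h>  // __rdtsc
#include <cstdint>
#include <cstdio>

int main() {
    uint64_t start = __rdtsc();      // raw tick count
    // ... work being timed ...
    uint64_t end = __rdtsc();

    // The difference is in TSC ticks, NOT seconds; converting to seconds
    // requires knowing the TSC frequency, which is platform-specific.
    std::printf("elapsed ticks: %llu\n",
                static_cast<unsigned long long>(end - start));
}
```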
If that is the case, then because the way a computer counts cycles will never be as precise as an atomic clock, a "second" (period = std::ratio<1>) on a computer can actually be shorter or longer than an actual second, causing time measurements to drift in the long run between the computer clock and, say, GPS.
Yes, a computer without an atomic clock (they're now available on a chip) isn't going to be as accurate as an atomic clock. That said, services such as the Network Time Protocol (NTP) allow you to maintain tighter coherence across a group of computers, sometimes aided by Pulse Per Second (PPS) techniques. A more modern and accurate variant is the Precision Time Protocol (PTP), which can often achieve sub-microsecond accuracy across a LAN.
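As a rough illustration of why this matters, the sketch below (plain standard C++, nothing NTP-specific) compares elapsed time on the wall clock (system_clock, which is typically disciplined by NTP) against a monotonic clock (steady_clock); if the wall clock is slewed or stepped during the interval, the two will disagree:

```cpp
// Sketch: observe divergence between the (usually NTP-adjusted) wall clock
// and the monotonic clock, which is immune to adjustments.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono;
    auto wall0 = system_clock::now();
    auto mono0 = steady_clock::now();

    std::this_thread::sleep_for(seconds(10));

    auto wall_elapsed = system_clock::now() - wall0;
    auto mono_elapsed = steady_clock::now() - mono0;

    // If the OS adjusts the wall clock during the sleep, the two elapsed
    // values differ; otherwise the drift should be close to zero.
    auto drift = duration_cast<microseconds>(wall_elapsed) -
                 duration_cast<microseconds>(mono_elapsed);
    std::cout << "wall vs monotonic drift: " << drift.count() << " us\n";
}
```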
Question 3: Does the "cycle count" measured by CPUs and GPUs vary depending on the hardware frequency?
That depends. For the TSC, newer "constant rate" implementations don't vary with the processing frequency; older ones do.
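If you want to check this on your own machine, here's a sketch assuming an x86 target built with GCC or Clang: CPUID leaf 0x80000007 reports the "invariant TSC" capability in EDX bit 8.

```cpp
// Sketch: query CPUID for the "invariant TSC" capability bit
// (leaf 0x80000007, EDX bit 8, as documented by Intel and AMD).
// Assumes an x86 target and the GCC/Clang <cpuid.h> helper.
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) {
        bool invariant_tsc = (edx >> 8) & 1u;
        std::printf("invariant TSC: %s\n", invariant_tsc ? "yes" : "no");
    } else {
        std::printf("CPUID leaf 0x80000007 not supported\n");
    }
}
```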
If yes, then how does std::chrono deal with it?
I'd expect most implementations to call an OS-provided time service, as the OS tends to have the best knowledge of, and access to, the hardware. There are a lot of factors that need to be considered - e.g. whether the TSC readings are in sync across cores, what happens if the PC goes into some kind of sleep mode, what manner of memory fences are desirable around the TSC sampling, and so on.
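To illustrate that division of labour: portable code just asks std::chrono, which in turn typically defers to an OS service such as clock_gettime(CLOCK_MONOTONIC) on POSIX systems (a common implementation choice, not something the standard guarantees). The sketch below shows both levels side by side:

```cpp
// Sketch: portable C++ timing vs the POSIX call it commonly sits on top of.
// The OS/libc layer deals with TSC synchronisation, sleep states, fencing, etc.
#include <chrono>
#include <iostream>
#include <time.h>   // clock_gettime (POSIX)

int main() {
    // Portable C++: the library picks the best OS facility available.
    auto t0 = std::chrono::steady_clock::now();

    // Roughly equivalent low-level call on POSIX systems, shown for comparison.
    timespec ts{};
    clock_gettime(CLOCK_MONOTONIC, &ts);

    auto t1 = std::chrono::steady_clock::now();
    std::cout << "approximate cost of one clock_gettime call: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count()
              << " ns\n";
}
```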
If not, what does a cycle correspond to (i.e. what is the "fundamental" unit of time)?
For Intel CPUs, see this answer.
Is there a way to access the conversion at compile-time? Is there a way to access the conversion at runtime?
std::chrono::duration::count exposes raw tick counts for whatever time source was used, and you can duration_cast to other units of time (e.g. seconds). C++20 is expected to introduce further facilities like clock_cast. AFAIK, there's no constexpr conversion available: that seems dubious anyway, given a program might end up running on a machine with a different TSC rate than the machine it was compiled on.
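A short sketch of the runtime conversions mentioned above (clock_cast and utc_clock need a C++20 standard library; everything else is C++11; the exact period of steady_clock::duration is implementation-defined):

```cpp
// Sketch: raw tick counts and unit conversion with std::chrono.
#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;

    auto start = steady_clock::now();
    // ... work being timed ...
    auto elapsed = steady_clock::now() - start;

    // Raw ticks in whatever period steady_clock::duration uses
    // (implementation-defined, often nanoseconds).
    std::cout << "raw ticks: " << elapsed.count() << '\n';

    // Explicit conversion to a known unit at runtime.
    std::cout << "microseconds: "
              << duration_cast<microseconds>(elapsed).count() << '\n';

#if __cpp_lib_chrono >= 201907L
    // C++20: convert a time_point between clocks, e.g. system_clock -> utc_clock.
    auto utc_now = clock_cast<utc_clock>(system_clock::now());
    std::cout << "UTC ticks since epoch: "
              << utc_now.time_since_epoch().count() << '\n';
#endif
}
```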