
I have two timestamps and I want to know the difference between them in seconds. Both timestamps are close to

5-9-2012 17:42:01

within a few hours of each other (about 7 hours).

From signal 1 we have an 18-digit timestamp (6.348246372197819e+17). The timestamp counts 100-nanosecond intervals since 1-JAN-0001. I guess it is a .NET timestamp (if someone knows the specific name of that timestamp, that would be great). To view the date string from the stamp:

TS1 = System.DateTime(int64(TS_signal1));  % construct a .NET DateTime from the tick count
TS1.ToString                               % display it as a date string

From signal 2 we have a 15-digit timestamp (735117.446046926), which I guess is a serial date number from MATLAB's datenum. If I am not mistaken, datenum counts fractional days since 0-JAN-0000.

Timestamp 1 can easily be converted into seconds by multiplying it by 10^-7 (100 ns resolution).
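
For example, a minimal sketch of that conversion (the round-trip through double limits the precision to roughly 1e-5 s for an 18-digit tick count, which is fine at the seconds level):

TS_signal1 = int64(634824637219781900);  % raw .NET tick count
TS_signal1_s = double(TS_signal1)*1e-7;  % ticks (100 ns) -> seconds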

Timestamp 2 is converted by multiplying it by 24*3600, correct? (What resolution does datenum have, 10 ms? After the multiplication the serial date number is a float: 63514147338.4544.)
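
One way to sanity-check the conversion and estimate the datenum resolution (a quick sketch; the resolution is set by double precision rather than being a fixed value):

TS_signal2 = 735117.446046926;
TS_signal2_s = TS_signal2*24*3600;  % fractional days -> seconds
eps(TS_signal2)*24*3600             % ~1e-5 s near year 2012, i.e. about 10 microseconds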

Now, the problem is that the timestamps do not start counting at the same time. Also, I have no info on the 18-digit timestamp (leap seconds, etc.). I tried to get the difference by creating a datenum timestamp from the known date string of the 18-digit signal (the date shown at the top). Then I subtracted the two at the seconds level. The difference was 21427199.0218048.

Calculation:

test = datenum('5-9-2012 17:42:01');  % use the date of the 18-digit stamp for datenum
test = test*24*3600;                  % convert to seconds
test - timestamp_18digits*10^(-7)     % subtract at the seconds level

The problem now is that when I add this to the signal 2 datenum timestamp and subtract the signal 1 timestamp (18 digits times 10^-7), I get a huge difference of 10281617.4543686 seconds, while it should be 7 hours plus some seconds.

Calculation:

...;  % Get Signal 1 and 2 timestamps

TS_signal2_s = timeStartCam*24*3600 + 7*3600;
TS_signal2_s = TS_signal2_s + 21427199.0218048;  % add the difference of the timestamps

offset_signals = TS_signal2_s - TS_signal1*(10^-7);  % 100 ns

Can anyone point me to the error I am making?

I already tried a lot. Any help is appreciated. Thanks a lot in advance.

P.S. I cannot answer quickly. Please be patient.

I see one problem to start: what is TS_signal1 stored as to begin with? If it's stored as a double, then you're going to have precision issues from the get-go. As discussed here, the largest integer that can reliably be stored by a double is 2^53, or a little more than 9e15. An 18-digit timestamp is unlikely to be represented exactly (i.e. there will be a significant amount of round-off to the nearest representable integer value). If it starts as an int64 or uint64, then you'll be good. – gnovice
Thanks for that advice. I will store it as int64 from now on. Though it should not be a problem, as I only need precision at the second level (~9e10). – Florida Man

1 Answer


TL;DR, since I got pretty verbose below...

TS_signal1 = int64(634824637219781900);  % .NET datetime number
TS_signal2 = 735117.446046926;           % MATLAB datetime number
ticks1 = System.DateTime(TS_signal1).AddYears(1).AddDays(2).Ticks;
ticks2 = int64(TS_signal2*1e9)*24*36;
dTicks = ticks1 - ticks2;          % Difference in ticks (100 nanoseconds)
dSeconds = double(dTicks)*(1e-7);  % Difference in seconds

dSeconds =
     2.518352378360000e+04  (just shy of 7 hours)

And now, the explanation...

I'm going to tackle the conversion of each datetime separately, and in a way that maintains as much precision as possible, even if you won't necessarily need it since you're dealing with differences between the two datetimes on the order of hours...

The .NET DateTime

One thing I noticed right away is that your datetime number (6.348246372197819e+17) is substantially larger than the maximum value a double-precision variable can reliably contain (2^53, or a little more than 9e15). Not every integer above that value can be exactly represented, so once you start storing integers larger than that in a double-precision variable you will begin to see significant loss of precision due to round-off to the nearest representable floating-point number.
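
A quick way to see this round-off (a small sketch using your value):

x = int64(634824637219781900);  % full-precision 18-digit integer
y = int64(double(x));           % round-trip through a double
x - y                           % nonzero: the double rounded to the nearest representable value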

Since you are converting your number to int64, that leads me to believe you're storing it as something else prior (like the default double), and something else just won't do. You want to make sure you have it defined as an int64 from the start, like so:

TS_signal1 = int64(634824637219781900);

Now, one key issue with comparing .NET and MATLAB datetimes (as you pointed out) is that they measure different quantities with respect to different reference time points: ticks (in 100-nanosecond units) since 1-JAN-0001 versus fractional days since 0-JAN-0000, respectively. We need to account for this difference to compare the two numbers. One way to do this is to first add time to the .NET datetime, since the MATLAB datetime has an older reference time, and measurements made with respect to it are that much larger.

So, how much time should we add? At first glance, just subtracting the reference times (1-JAN-0001 minus 0-JAN-0000) would suggest we add 1 year and 1 day to the .NET datetime so that it is representing the number of ticks from 0-JAN-0000. This is close, but not quite right. Since 0000 technically counts as a leap year, it has an extra day, so you actually have to add 1 year and 2 days worth of extra ticks to the .NET datetime. We could do this with math, or we can use the System.DateTime class and a few of its methods to make it quick and easy:

ticks1 = System.DateTime(TS_signal1).AddYears(1).AddDays(2).Ticks;

Now we have the number of ticks with respect to 0-JAN-0000. We could convert this to seconds to continue our calculations. However, converting to seconds would require changing it to a floating-point representation (i.e. double) which would then cause a precision loss since our number is still huge. Best to continue the calculations in units of ticks.
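
As a quick sanity check on that 1-year-plus-2-days figure, MATLAB itself places 1-JAN-0001 at serial day number 367, i.e. 367 days after 0-JAN-0000:

datenum('01-Jan-0001')  % returns 367 = 366 days of (leap) year 0000 + 1 day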

The MATLAB serial date number

Your MATLAB datetime, represented as a serial date number (735117.446046926), is a floating point value measuring the number of (fractional) days elapsed since 0-JAN-0000. To compare to our .NET datetime, we need to convert it to ticks, so we should scale it by 24*3600*1e7 (i.e. hours/day times seconds/hour times ticks/second, with a tick being 100 nanoseconds). But there's a problem here. If we apply all this scaling at once, we again overwhelm our double variable with an integer too big to handle, causing precision loss. But we don't want to convert our double to an int64 until we have scaled it up enough to get a whole number value, or we risk rounding off fractional information.

The solution is pretty simple: apply as much scaling as we can to get a large integer that is still less than 2^53, convert to int64, then apply the remainder of the scaling:

TS_signal2 = 735117.446046926;
ticks2 = int64(TS_signal2*1e9)*24*36;
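
As a quick check, the intermediate value is a whole number that a double can still represent exactly, so the int64 conversion rounds off nothing:

TS_signal2*1e9          % 7.35117446046926e+14, a whole number
TS_signal2*1e9 < 2^53   % true: well below the ~9.01e15 limit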

Putting it all together

We can now compute the difference in ticks (and seconds) between the two time points:

dTicks = ticks1 - ticks2;          % Difference in ticks (100 nanoseconds)
dSeconds = double(dTicks)*(1e-7);  % Difference in seconds

dSeconds =
     2.518352378360000e+04

To confirm, let's convert it to an hh:mm:ss duration and compare to the date strings:

durSec = seconds(dSeconds);  % Convert to a duration...
durSec.Format = 'hh:mm:ss'   % ... and format it

durSec = 
  duration
   06:59:43

System.DateTime(TS_signal1).ToString  % Convert .NET datetime to a string

ans = 
9/5/2012 5:42:01 PM

datetime(TS_signal2, 'ConvertFrom', 'datenum')  % Convert MATLAB datenum to a datetime

ans = 
  datetime
   05-Sep-2012 10:42:18

And we can see that the two times do in fact differ by 6 hours, 59 minutes, and 43 seconds. We could have converted the datetime numbers to date strings, then extracted the hours, minutes, and seconds and done some math to get the answer, but we would have lost quite a bit of precision in the process. Computing things in units of ticks as we did above maintained as much precision as possible...
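
For comparison, here is a rough sketch of that lossier string-based route; it lands about a second off, since the .NET date string drops the fractional seconds (the full-precision time is 17:42:01.97819):

d1 = datetime('9/5/2012 5:42:01 PM', 'InputFormat', 'M/d/yyyy h:mm:ss a');
d2 = datetime(TS_signal2, 'ConvertFrom', 'datenum');
seconds(d1 - d2)  % ~25182.5 s, about 1 s short of the tick-based 25183.5 s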

... and isn't it nicer to keep all the precision you can?