TL;DR, since I got pretty verbose below...
TS_signal1 = int64(634824637219781900); % .NET datetime number
TS_signal2 = 735117.446046926; % MATLAB datetime number
ticks1 = System.DateTime(TS_signal1).AddYears(1).AddDays(2).Ticks;
ticks2 = int64(TS_signal2*1e9)*24*36;
dTicks = ticks1 - ticks2; % Difference in ticks (100 nanoseconds)
dSeconds = double(dTicks)*(1e-7); % Difference in seconds
dSeconds =
2.518352378360000e+04 (just shy of 7 hours)
And now, the explanation...
I'm going to tackle the conversion of each datetime separately, and in a way that maintains as much precision as possible, even if you won't necessarily need it since you're dealing with differences between the two datetimes on the order of hours...
One thing I noticed right away is that your datetime number (6.348246372197819e+17) is substantially larger than the maximum value a double-precision variable can reliably contain (2^53, or a little more than 9e15). Not every integer above that value can be exactly represented, so once you start storing integers larger than that in a double-precision variable you will begin to see significant loss of precision due to round-off to the nearest representable floating-point number.
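To see the granularity problem concretely, here's a quick check (just a sketch using the tick value from the question; eps reports the spacing between adjacent representable doubles at a given magnitude):
x = 634824637219781900;   % parsed and stored as a double
eps(x)                    % ans = 128: neighboring doubles are 128 ticks apart here
x == x + 10               % true: a 10-tick (1 microsecond) change is lost to round-off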
Since you are converting your number to int64, that leads me to believe you're storing it as something else beforehand (probably the default double), and something else just won't do. You want to make sure you have it defined as an int64 from the start, like so:
TS_signal1 = int64(634824637219781900)
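As a rough illustration of why the order matters (a sketch; recent MATLAB versions preserve the full precision of an integer literal passed directly to int64, whereas a value that has already passed through a double has already been rounded):
a = int64(634824637219781900)   % literal converted directly: kept exact in recent MATLAB
x = 634824637219781900;         % stored as a double first...
b = int64(x)                    % ...so this value has already been rounded
a == b                          % false here: the detour through double cost a few ticks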
Now, one key issue with comparing .NET and MATLAB datetimes (as you pointed out) is that they measure different quantities with respect to different reference time points: ticks (in 100-nanosecond units) since 1-JAN-0001 versus fractional days since 0-JAN-0000, respectively. We need to account for this difference to compare the two numbers. One way to do this is to first add time to the .NET datetime, since the MATLAB datetime has the older reference time, and measurements made with respect to it are that much larger.
So, how much time should we add? At first glance, just subtracting the reference times (1-JAN-0001 minus 0-JAN-0000) would suggest we add 1 year and 1 day to the .NET datetime so that it represents the number of ticks since 0-JAN-0000. This is close, but not quite right. Since year 0000 technically counts as a leap year, it has an extra day, so you actually have to add 1 year and 2 days' worth of extra ticks to the .NET datetime. We could do this with math, or we can use the System.DateTime class and a few of its methods to make it quick and easy:
ticks1 = System.DateTime(TS_signal1).AddYears(1).AddDays(2).Ticks;
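As a sanity check on that 1-year-and-2-days offset, you can ask MATLAB directly how many days sit between its reference point and the .NET one:
datenum(1,1,1)   % ans = 367: days from 0-JAN-0000 to 1-JAN-0001 (366 for leap year 0000, plus 1)
That matches the 365 + 2 days that AddYears(1).AddDays(2) contributes for this particular September date (the AddYears call doesn't cross an extra leap day here).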
Now we have the number of ticks with respect to 0-JAN-0000. We could convert this to seconds to continue our calculations. However, converting to seconds would require changing it to a floating-point representation (i.e. a double), which would then cause a precision loss since our number is still huge. Best to continue the calculations in units of ticks.
Your MATLAB datetime, represented as a serial date number (735117.446046926), is a floating-point value measuring the number of (fractional) days elapsed since 0-JAN-0000. To compare it to our .NET datetime, we need to convert it to ticks, so we should scale it by 24*3600*1e7 (i.e. hours/day times seconds/hour times ticks/second, with a tick being 100 nanoseconds). But there's a problem here. If we apply all this scaling at once, we again overwhelm our double variable with an integer too big to handle, causing precision loss. At the same time, we don't want to convert our double to an int64 until we have scaled it up enough to get a whole-number value, or we risk rounding off fractional information.
The solution is pretty simple: apply as much scaling as we can to get a large integer that is still less than 2^53, convert to int64, then apply the remainder of the scaling:
TS_signal2 = 735117.446046926;
ticks2 = int64(TS_signal2*1e9)*24*36;
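If you want to convince yourself that this two-stage scaling stays in safe territory, you can compare the intermediate magnitudes against flintmax (the largest consecutive integer a double can represent, i.e. 2^53); this is just a sanity-check sketch:
flintmax                   % ans = 9.007199254740992e+15
TS_signal2*1e9             % ~7.35e14: still below flintmax, so nothing has been lost yet
TS_signal2*24*3600*1e7     % ~6.35e17: far above flintmax, which is why we don't scale all at once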
Putting it all together
We can now compute the difference in ticks (and seconds) between the two time points:
dTicks = ticks1 - ticks2; % Difference in ticks (100 nanoseconds)
dSeconds = double(dTicks)*(1e-7); % Difference in seconds
dSeconds =
2.518352378360000e+04
To confirm, let's convert it to an hh:mm:ss duration and compare to the date strings:
durSec = seconds(dSeconds); % Convert to a duration...
durSec.Format = 'hh:mm:ss' % ... and format it
durSec =
duration
06:59:43
System.DateTime(TS_signal1).ToString % Convert .NET datetime to a string
ans =
9/5/2012 5:42:01 PM
datetime(TS_signal2, 'ConvertFrom', 'datenum') % Convert MATLAB datenum to a datetime
ans =
datetime
05-Sep-2012 10:42:18
And we can see that the two times do in fact differ by 6 hours, 59 minutes, and 43 seconds. We could have converted the datetime numbers to date strings, then extracted the hours, minutes, and seconds and done some math to get the answer, but we would have lost quite a bit of precision in the process. Computing things in units of ticks as we did above maintained as much precision as possible...
... and isn't it nicer to keep all the precision you can?
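If you end up doing this conversion regularly, the steps above package neatly into a small helper. This is just a sketch (the function name netTicksMinusDatenum is mine, not anything standard), and it bakes in the same AddYears(1).AddDays(2) rebasing used above:
function dSeconds = netTicksMinusDatenum(netTicks, mlDatenum)
% Difference (in seconds) between a .NET tick count and a MATLAB serial date number.
%   netTicks:  .NET System.DateTime ticks, supplied as an int64
%   mlDatenum: MATLAB serial date number (double, fractional days since 0-JAN-0000)
    ticks1 = System.DateTime(netTicks).AddYears(1).AddDays(2).Ticks;   % rebase to 0-JAN-0000
    ticks2 = int64(mlDatenum*1e9)*24*36;   % days -> ticks, scaled in two stages to protect precision
    dSeconds = double(ticks1 - ticks2)*1e-7;   % ticks (100 ns) -> seconds
end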