2 votes

Is it correct behavior for BigQuery's TIMESTAMP(<date_string>) function to return epoch time instead of failing the query if the argument provided is not a valid STRING?

For example:

SELECT TIMESTAMP(2015-06-30) as foo FROM [a_table]

..runs as valid SQL and returns 1970-01-01 00:32:59 UTC

Obviously, the argument is missing the enclosing quotes, so:

SELECT TIMESTAMP('2015-06-30') as foo FROM [a_table]

..now returns 2015-06-30 00:00:00 UTC

By contrast, PARSE_UTC_USEC(<date_string>), which also expects a STRING parameter, complains when the argument is not a proper STRING:

SELECT PARSE_UTC_USEC(2015-06-30) as foo FROM [a_table]

Error: Argument to PARSE_UTC_USEC must have type STRING but was int64


1 Answer

1 vote

I just did a lot of testing with this, and indeed the behavior is as you described:

By default, TIMESTAMP() takes a string, but if it gets an int, it treats it as an int. I tried TIMESTAMP(2011) and it gave me epoch + 33 minutes or so, which suggests the integer is being interpreted as seconds since the Unix epoch.
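That would also explain the original result: without quotes, 2015-06-30 is evaluated as integer arithmetic, 2015 − 6 − 30 = 1979, and 1979 seconds after the epoch is exactly 1970-01-01 00:32:59 UTC. A quick sketch of that reasoning in Python (the seconds-since-epoch interpretation is an inference from the observed output, not documented BigQuery behavior):

```python
from datetime import datetime, timezone

# Unquoted 2015-06-30 is parsed as the arithmetic expression 2015 - 6 - 30.
value = 2015 - 6 - 30  # 1979

# Assumption: TIMESTAMP() treats the integer as seconds since the Unix epoch.
ts = datetime.fromtimestamp(value, tz=timezone.utc)
print(ts.strftime("%Y-%m-%d %H:%M:%S UTC"))   # 1970-01-01 00:32:59 UTC

# Same reading for TIMESTAMP(2011): 2011 seconds = 33 min 31 s after epoch.
ts2 = datetime.fromtimestamp(2011, tz=timezone.utc)
print(ts2.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 1970-01-01 00:33:31 UTC
```

Both computed values line up with what the query actually returned, which is why the silent int coercion is so easy to miss.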

In any case, the two functions' behaviors should presumably be aligned, and one of them is likely not behaving as intended. You can always file a defect report or feature request here to get the behavior documented or changed.