In Hive, when one casts a bigint to a timestamp, the bigint is interpreted as milliseconds since the epoch. Oddly, when casting a timestamp to a bigint, the result is in seconds.
In Impala, casts in both directions treat the bigint as seconds. This is an improvement in that it is self-consistent, but a regression in that it doesn't match Hive. Is this difference documented anywhere? I could not find it.
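To make the Hive asymmetry concrete, here is a small Python sketch (the epoch value is an arbitrary example, not from any Hive documentation) showing why a bigint-to-timestamp-to-bigint round trip in Hive does not return the original value:

```python
from datetime import datetime, timezone

millis = 1704067200000  # example bigint: 2024-01-01 00:00:00 UTC in milliseconds

# Hive's cast(bigint as timestamp) treats the value as milliseconds
ts = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
print(ts)  # 2024-01-01 00:00:00+00:00

# Hive's cast(timestamp as bigint) yields seconds, so the round trip
# comes back a factor of 1000 smaller than the original bigint
seconds = int(ts.timestamp())
print(seconds)            # 1704067200
print(seconds == millis)  # False
```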
I want to convert a timestamp to a bigint where the bigint is in milliseconds. The best that I can come up with is either:

    unix_timestamp(time) * 1000 + extract(millisecond from time)

or:

    extract(epoch from time) * 1000 + extract(millisecond from time)
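The arithmetic behind both expressions can be mirrored in Python (a sketch assuming `unix_timestamp`/`extract(epoch ...)` return whole seconds and `extract(millisecond ...)` returns the 0-999 millisecond field of the timestamp):

```python
from datetime import datetime, timezone

t = datetime(2024, 1, 1, 12, 30, 45, 250000, tzinfo=timezone.utc)  # 12:30:45.250 UTC

whole_seconds = int(t.timestamp())        # plays the role of unix_timestamp(time)
millisecond_part = t.microsecond // 1000  # plays the role of extract(millisecond from time)

# seconds * 1000 + millisecond field = full epoch milliseconds
epoch_millis = whole_seconds * 1000 + millisecond_part
print(epoch_millis)  # 1704112245250
```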