Created 11-01-2017 06:28 PM
Hi team,
I am looking to convert a unix timestamp field to a human-readable format.
Can someone help me with this?
I am using from_unixtime('Timestamp', "yyyy-MM-ddThh:mm:ss"), but it is not working.
Any suggestions would be of great help.
Created 11-03-2017 04:34 AM
Converting a regular unix timestamp field to a human-readable format without the T in it is a lot simpler, as you can use the conversion below for that.
pyspark >>> hiveContext.sql("select from_unixtime(cast(1509672916 as bigint),'yyyy-MM-dd HH:mm:ss.SSS')").show(truncate=False)
+-----------------------+
|_c0                    |
+-----------------------+
|2017-11-02 21:35:16.000|
+-----------------------+
pyspark >>> hiveContext.sql("select from_unixtime(cast(<unix-timestamp-column-name> as bigint),'yyyy-MM-dd HH:mm:ss.SSS')").show(truncate=False)
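If you prefer the DataFrame API over SQL strings, the same conversion can be written with pyspark.sql.functions.from_unixtime. A minimal sketch, assuming the same pyspark shell with hiveContext available; the DataFrame and its ts column are placeholder names for this example only:

from pyspark.sql.functions import col, from_unixtime

# Tiny demo DataFrame; `ts` is a placeholder column holding epoch seconds.
df = hiveContext.createDataFrame([(1509672916,)], ["ts"])

# Cast to bigint and format, mirroring the SQL version above.
df.select(from_unixtime(col("ts").cast("bigint"), "yyyy-MM-dd HH:mm:ss.SSS").alias("readable")).show(truncate=False)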
But you are expecting the format yyyy-MM-ddThh:mm:ss, with the T in it.
For this case you need to concat the date and the time with the letter T, as shown below.
pyspark >>> hiveContext.sql("""select concat(concat(substr(cast(from_unixtime(cast(1509672916 as bigint),'yyyy-MM-dd HH:mm:ss.SS') as string),1,10),'T'),substr(cast(from_unixtime(cast(1509672916 as bigint),'yyyy-MM-dd HH:mm:ss.SS') as string),12))""").show(truncate=False)
+----------------------+
|_c0                   |
+----------------------+
|2017-11-02T21:35:16.00|
+----------------------+
Your query:
pyspark >>> hiveContext.sql("""select concat(concat(substr(cast(from_unixtime(cast(<unix-timestamp-column-name> as bigint),'yyyy-MM-dd HH:mm:ss.SS') as string),1,10),'T'), substr(cast(from_unixtime(cast(<unix-timestamp-column-name> as bigint),'yyyy-MM-dd HH:mm:ss.SS') as string),12))""").show(truncate=False)
(replace <unix-timestamp-column-name> with your column name)
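As a side note: from_unixtime takes a Java SimpleDateFormat pattern, and SimpleDateFormat treats text wrapped in single quotes as a literal. That is most likely why the pattern in the original question failed: the T was unquoted (and hh is 12-hour, whereas HH is 24-hour). Assuming that holds on your version, quoting the T should give the same result without any concat; a sketch:

pyspark >>> hiveContext.sql("""select from_unixtime(cast(1509672916 as bigint), "yyyy-MM-dd'T'HH:mm:ss")""").show(truncate=False)

This should print 2017-11-02T21:35:16 in a single step.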
In case you want to test this in Hive, use the query below:
hive# select concat(concat(substr(cast(from_unixtime(cast(1509672916 as bigint),'yyyy-MM-dd HH:mm:ss.SSS') as string),1,10),'T'), substr(cast(from_unixtime(cast(1509672916 as bigint),'yyyy-MM-dd HH:mm:ss.SSS') as string),12));
+--------------------------+--+
|           _c0            |
+--------------------------+--+
| 2017-11-02T21:35:16.000  |
+--------------------------+--+
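The quoted-'T' pattern mentioned above should also work directly in Hive, since Hive's from_unixtime uses SimpleDateFormat patterns as well; a sketch, not verified on every Hive version:

hive# select from_unixtime(cast(1509672916 as bigint), "yyyy-MM-dd'T'HH:mm:ss.SSS");

which should return 2017-11-02T21:35:16.000 without the nested concat/substr calls.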
Hope this helps to resolve your issue!