I have a Hive table that has to be updated quite often, so I created it as a "Transactional Table" in ORC format. I am trying to create a Parquet file from it using the following commands in Spark:
val q = spark.sql("select * from hive_table")
q.write.parquet("/path/to/output")  // output path here is just an example
The above commands work for non-transactional tables but not for this particular table. I need to read this table into a Parquet file because we are using Spark to query it (along with many others).
Can someone suggest how to do this? Are there any parameters that need to be set in Spark? I am not even able to run a simple "select count(*) from db_name.hive_table" query on this table from Spark, although I can do it in Hive after setting some parameters.