Created 05-11-2016 11:05 AM
Hi,
I am currently using Spark Streaming to write to an external Hive table every 30 minutes:
rdd.toDF().write.partitionBy("dt").options(options).format("orc").mode(SaveMode.Append).saveAsTable("table_name")
The issue with this is that it creates lots of small files in HDFS, like so:
part-00000 part-00000_copy_1
My table was created with transactions enabled, and I have enabled ACID transactions on the Hive instance. However, I can't see any compactions running, nor do any get created when I force a compaction with the ALTER TABLE ... COMPACT command. I would expect compaction to run and merge these files, as they are very small (around 200 KB each).
Any ideas or help greatly appreciated.
Created 05-11-2016 06:11 PM
Hi @Chris McGuire,
Can you please provide the output of "hdfs dfs -ls -R <table-folder>"?
Compaction only operates on tables with delta directories. I suspect that the method you're using (SaveMode.Append) is just appending to the existing partition (or adding a new partition) and not actually creating deltas.
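For reference, a table that compaction can act on will show delta directories under each partition, something like the following (hypothetical warehouse path and transaction IDs):

/apps/hive/warehouse/table_name/dt=2016-05-11/delta_0000001_0000001/bucket_00000
/apps/hive/warehouse/table_name/dt=2016-05-11/delta_0000002_0000002/bucket_00000

Plain part-00000 files sitting directly under the partition directory are a sign of a non-ACID append.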
Best,
Eric
Created 05-11-2016 09:59 PM
Thanks @mbalakrishnan. I'm currently running the Spark Streaming job locally, writing to the Hive instance deployed on my cluster. I have added the hive.merge.sparkfiles property. Will this work on files written with the saveAsTable command?
Created 05-12-2016 06:24 PM
Hive ACID tables are not integrated with Spark. To write to an ACID table in a streaming fashion, you could use the Hive Streaming API: https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-StreamingAPIs
(The hdfs dfs -ls -R output shows the table is not in the expected format for an ACID table. You can check the metastore log for errors regarding compaction, but I would not expect it to work.)
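If you go that route, here is a minimal sketch in Scala using the hive-hcatalog-streaming classes from that page (the metastore URI, column names, and partition value are placeholders, not taken from your setup):

import scala.collection.JavaConverters._
import org.apache.hive.hcatalog.streaming.{DelimitedInputWriter, HiveEndPoint}

// Placeholder metastore URI, database, table, and partition value.
val endPoint = new HiveEndPoint("thrift://metastore-host:9083", "default",
  "table_name", List("2016-05-11").asJava)
val connection = endPoint.newConnection(true) // create the partition if missing
val writer = new DelimitedInputWriter(Array("col1", "col2"), ",", endPoint)

// Each committed transaction batch lands as delta files under the partition,
// which the compactor can then merge.
val txnBatch = connection.fetchTransactionBatch(10, writer)
txnBatch.beginNextTransaction()
txnBatch.write("value1,value2".getBytes("UTF-8"))
txnBatch.commit()
txnBatch.close()
connection.close()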
Created 09-07-2018 03:08 PM
Compaction works only on transactional tables, and to make a table transactional it should meet the following properties:
1. It must be stored as ORC.
2. It must be bucketed (CLUSTERED BY ... INTO n BUCKETS; this requirement is lifted in Hive 3.0).
3. It must be declared transactional with TBLPROPERTIES ('transactional'='true').
As you can see from the last point, you can't run compaction on a non-transactional table; if you try it from Hive you will definitely get an error, but I'm not sure what happens from Spark.
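For example, here is a sketch of creating a compaction-eligible table and forcing a compaction from Scala over Hive JDBC (host, port, and the table definition are placeholders; assumes org.apache.hive.jdbc.HiveDriver is on the classpath):

import java.sql.DriverManager

// Placeholder HiveServer2 URL; adjust host, port, and database.
val conn = DriverManager.getConnection("jdbc:hive2://hiveserver-host:10000/default", "user", "")
val stmt = conn.createStatement()

// ORC + bucketed + transactional=true makes the table eligible for compaction.
stmt.execute(
  """CREATE TABLE events_acid (id INT, payload STRING)
    |PARTITIONED BY (dt STRING)
    |CLUSTERED BY (id) INTO 4 BUCKETS
    |STORED AS ORC
    |TBLPROPERTIES ('transactional'='true')""".stripMargin)

// Force a major compaction on one partition, then check its status.
stmt.execute("ALTER TABLE events_acid PARTITION (dt='2016-05-11') COMPACT 'major'")
val rs = stmt.executeQuery("SHOW COMPACTIONS")
val cols = rs.getMetaData.getColumnCount
while (rs.next()) println((1 to cols).map(i => rs.getString(i)).mkString("\t"))

conn.close()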