We use Spark to flatten out clickstream data and then write it to S3 in ORC+zlib format. I have tried changing many Spark settings, but the stripe sizes of the resulting ORC files are still very small (<2 MB).
Things I have tried so far to increase the stripe size:
Earlier each file was 20 MB in size; using coalesce I am now creating files of 250-300 MB, but there are still about 200 stripes per file, i.e. each stripe is <2 MB.
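For reference, the coalesce step can be sketched as below. The partition-count arithmetic is plain Python; the Spark calls and the S3 path are illustrative assumptions, not taken from the original post.

```python
# Estimate how many output files are needed so each lands near a target size.
def num_output_files(total_bytes, target_file_bytes=256 * 1024 * 1024):
    """Return a partition count that yields files near target_file_bytes."""
    return max(1, -(-total_bytes // target_file_bytes))  # ceiling division

# Illustrative Spark usage (df and the output path are assumptions):
# n = num_output_files(estimated_dataset_bytes)
# (df.coalesce(n)
#    .write
#    .option("compression", "zlib")
#    .orc("s3://bucket/clickstream/"))
```

Note that coalesce only changes file sizes, not stripe sizes: each 250-300 MB file is still written with the writer's stripe settings, which is why the stripe count per file grows instead.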
Tried using HiveContext instead of SparkContext and setting hive.exec.orc.default.stripe.size to 67108864, but Spark isn't honoring the parameter.
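One alternative worth trying (a sketch only; whether Spark forwards these options to the ORC writer depends on the Spark/ORC versions in use) is to pass the ORC writer's own configuration keys, such as orc.stripe.size, as data source write options instead of Hive parameters:

```python
# ORC writer settings expressed as data source options. The key names come
# from ORC's own configuration (orc.stripe.size); whether a given Spark
# version forwards them to the writer is an assumption to verify.
orc_writer_options = {
    "compression": "zlib",
    "orc.stripe.size": str(64 * 1024 * 1024),  # 64 MB stripes
}

# Illustrative usage (df and the output path are assumptions):
# writer = df.write
# for key, value in orc_writer_options.items():
#     writer = writer.option(key, value)
# writer.orc("s3://bucket/clickstream/")
```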
So, any idea how I can increase the stripe size of the ORC files being created? The problem with small stripes is that when we query these ORC files with Presto and the stripe size is less than 8 MB, Presto reads the whole data file instead of only the fields selected in the query.
Presto Stripe issue related thread: https://groups.google.com/forum/#!topic/presto-users/7NcrFvGpPaA
I'm using Hive 2.3.4 and Spark 2.4.4 with Hadoop 2.8.5, but my PySpark code still isn't picking up my stripe size parameter for ORC creation. I have posted a new question to this community as well.
Could you please advise on this.
Thanks Dongjoon for the reply. But what about people who don't use HDP? Is there an open JIRA where someone is working on integrating the latest version of Hive with Spark? If you are aware of any such thread, could you please share the link?
If you can wait for it, Apache Spark 2.3 will be released with Apache ORC 1.4.1.
There are many ORC patches in Hive, and Apache Spark cannot sync them promptly.
So, in Apache Spark, we decided to use the latest ORC 1.4.1 library directly instead of upgrading the Hive 1.2.1 library.
From Apache Spark 2.3, Hive ORC tables are converted into ORC data source tables by default, and the ORC 1.4.1 library is used to read them.
Not only is your issue addressed, but vectorization on ORC is also supported.
Anyway, again, HDP 2.6.3+ already ships with ORC 1.4.1 and vectorization, too.
@Dongjoon Hyun Just want to check whether the ORC library version change (i.e. to ORC 1.4.1) is getting picked up as part of the Spark 2.3 release. I have gone through the PRs under SPARK-20901, but I didn't find any conversation related to the ORC library upgrade.
In SPARK-20901 `Feature Parity for ORC with Parquet`, you can see the issue links marked as `is blocked by`. Among them, the following issues are the ones you want to see for the ORC library:
- SPARK-21422 Depend on Apache ORC 1.4.0
- SPARK-22300 Update ORC to 1.4.1
In addition to that, the following converts Hive ORC tables into Spark data source tables so that Apache ORC 1.4.1 is used:
- SPARK-22279 Turn on spark.sql.hive.convertMetastoreOrc by default
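Taken together, the configuration keys from these JIRAs can be sketched as below. The values are shown as a plain dict so the intent is clear without a running cluster; the SparkSession bootstrap in the comments is an illustrative assumption.

```python
# Session configs named in the JIRAs above (SPARK-22279 etc.). Defaults
# vary by Spark version, so setting them explicitly is the safe sketch.
native_orc_confs = {
    "spark.sql.orc.impl": "native",                  # ORC 1.4.x instead of the Hive 1.2.1 reader
    "spark.sql.hive.convertMetastoreOrc": "true",    # SPARK-22279
    "spark.sql.orc.enableVectorizedReader": "true",  # vectorized ORC reads
}

# Illustrative usage:
# from pyspark.sql import SparkSession
# builder = SparkSession.builder
# for key, value in native_orc_confs.items():
#     builder = builder.config(key, value)
# spark = builder.getOrCreate()
```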