
Spark ORC Stripe Size


We use Spark to flatten clickstream data and then write it to S3 in ORC+zlib format. I have tried changing many Spark settings, but the stripe sizes of the resulting ORC files are still very small (<2 MB).

Things I have tried so far to increase the stripe size:

Earlier, each file was 20 MB in size; using coalesce I now create files of 250-300 MB, but there are still ~200 stripes per file, i.e. each stripe is <2 MB.

I tried using HiveContext instead of SparkContext and setting hive.exec.orc.default.stripe.size to 67108864 (64 MB), but Spark isn't honoring the parameter.
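
For reference, a minimal sketch of both attempts; the source path, output path, and partition count below are illustrative, not our real job:

```python
from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName="clickstream-orc")
hc = HiveContext(sc)

# Ask for 64MB stripes via the Hive setting (Spark does not honor this).
hc.setConf("hive.exec.orc.default.stripe.size", "67108864")

df = hc.read.json("s3://my-bucket/clickstream/raw/")  # illustrative source

# Coalesce so each output file is 250-300MB instead of 20MB; the files
# grow, but each still contains ~200 stripes of <2MB.
(df.coalesce(10)
   .write
   .format("orc")
   .option("compression", "zlib")
   .save("s3://my-bucket/clickstream/orc/"))
```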

So, any idea how I can increase the stripe size of the ORC files being created? The problem with small stripes is that when we query these ORC files using Presto and the stripe size is less than 8 MB, Presto reads the whole data file instead of only the columns selected in the query.

Related Presto thread on the stripe issue: https://groups.google.com/forum/#!topic/presto-users/7NcrFvGpPaA

1 ACCEPTED SOLUTION

Expert Contributor

Hi, @Rajiv Chodisetti.

It's related to HIVE-13232 (fixed in Hive 1.3.0, 2.0.1, and 2.1.0), but Apache Spark still uses the Hive 1.2.1 library.

Could you try HDP 2.6.3+ (2.6.4 is the latest)? HDP Spark 2.2 ships with the fixed Hive library.
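
After upgrading, a quick sanity check could look like this (a sketch; the output path and row count are illustrative):

```python
from pyspark.sql import SparkSession

# Assumes HDP 2.6.3+ (Spark 2.2 with the patched Hive 1.2.1 library).
spark = SparkSession.builder.appName("stripe-check").enableHiveSupport().getOrCreate()

# Ask for 64MB stripes; with HIVE-13232 fixed, this should be honored.
spark.conf.set("hive.exec.orc.default.stripe.size", "67108864")

spark.range(0, 50000000).write.format("orc").mode("overwrite") \
    .save("/tmp/orc_stripe_check")
```

You can then inspect the stripe layout of one of the output files with `hive --orcfiledump`.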


9 REPLIES


New Contributor

Hi,

I'm using Hive 2.3.4, Spark 2.4.4, and Hadoop 2.8.5, but my PySpark code still does not pick up the stripe-size parameter for ORC creation. I have posted a new question to this community as well:

https://community.cloudera.com/t5/Support-Questions/Unable-to-set-stripe-size-for-the-orc-file-using...
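
For context, a minimal sketch of what I am running (paths are illustrative); the stripes in the output still come out small:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orc-stripe").getOrCreate()

# Both ways I tried to request 64MB stripes; neither seems to take effect.
spark.conf.set("orc.stripe.size", "67108864")

df = spark.read.parquet("s3://my-bucket/input/")  # illustrative source
(df.write
   .format("orc")
   .option("orc.stripe.size", "67108864")  # also tried as a writer option
   .save("s3://my-bucket/output/"))
```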


Could you please advise on this?

Thanks,

Sai


Thanks, Dongjoon, for the reply. But what about people who don't use HDP? Is there an open JIRA where someone is working on integrating the latest version of Hive with Spark? If you are aware of such a thread, could you please share the link?

Expert Contributor

If you can wait for it, Apache Spark 2.3 will be released with Apache ORC 1.4.1.

There are many ORC patches in Hive, and Apache Spark cannot sync them promptly.

So, in Apache Spark, we decided to use the latest ORC 1.4.1 library instead of upgrading the Hive 1.2.1 library.

From Apache Spark 2.3, Hive ORC tables are converted into ORC data source tables by default, and the ORC 1.4.1 library is used to read them.

Not only your issue but also vectorization on ORC is supported.

Anyway, again, HDP 2.6.3+ already ships with ORC 1.4.1, with vectorization, too.
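
Once you are on Spark 2.3, the new path can be exercised roughly like this (a sketch; the config keys are the Spark 2.3 names, and the output path is illustrative):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("native-orc").getOrCreate()

# Spark 2.3: use the ORC 1.4.1 implementation and its vectorized reader.
spark.conf.set("spark.sql.orc.impl", "native")
spark.conf.set("spark.sql.orc.enableVectorizedReader", "true")

# With the native writer, ORC settings such as the stripe size (64MB here)
# should be picked up from the session configuration.
spark.conf.set("orc.stripe.size", "67108864")

spark.range(0, 10000000).write.format("orc").mode("overwrite") \
    .save("/tmp/orc_native_test")
```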

Expert Contributor

As of now, Apache JIRA is in `Maintenance in progress` mode, so I cannot give you the link. The umbrella ORC JIRA is

https://issues.apache.org/jira/browse/SPARK-20901.


Thanks for the update. Vectorization support is another feature we have been waiting on for a long time.


@Dongjoon Hyun Just wanted to check whether the ORC library version change (to ORC 1.4.1) is being picked up as part of the Spark 2.3 release. I have gone through the PRs under SPARK-20901, but I didn't find any conversation related to the ORC library upgrade.

Expert Contributor

I added a comment above.

Expert Contributor

In SPARK-20901 `Feature Parity for ORC with Parquet`, you can see the issue links marked as `is blocked by`. Among them, the following issues are the ones to watch for the ORC library:

- SPARK-21422 Depend on Apache ORC 1.4.0

- SPARK-22300 Update ORC to 1.4.1

In addition, the following issue will convert Hive ORC tables into Spark data source tables so that Apache ORC 1.4.1 is used:

- SPARK-22279 Turn on spark.sql.hive.convertMetastoreOrc by default
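
For reference, a sketch of how those settings look together on Spark 2.3 (the table and column names below are hypothetical):

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("orc-datasource-read")
         .enableHiveSupport()
         .getOrCreate())

# The settings named in the JIRAs above, with their Spark 2.3 names.
spark.conf.set("spark.sql.orc.impl", "native")                # SPARK-21422 / SPARK-22300
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "true")  # SPARK-22279 (default in 2.3)

# Reading a Hive ORC table now goes through the ORC 1.4.1 data source reader.
df = spark.table("clickstream_orc")  # hypothetical Hive table
df.select("user_id").show()          # hypothetical column
```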