ORC vs Parquet - When to use one over the other


Hi All,

While ORC and Parquet are both columnar data formats supported in HDP, is there any additional guidance on when to use one over the other, or things to consider before choosing which format to use?

Thanks,

Andrew

1 ACCEPTED SOLUTION

Rising Star

In my mind, the two biggest considerations for ORC over Parquet are:

1. Many of the performance improvements from the Stinger initiative depend on features of the ORC format, including a block-level index for each column. This makes I/O more efficient, allowing Hive to skip reading entire blocks of data when the index shows that the predicate values are not present. The Cost-Based Optimizer can also use the column-level metadata in ORC files to generate the most efficient execution plan.

2. ACID transactions are only possible when using ORC as the file format (see the sketch below).
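To make that concrete, here is a minimal HiveQL sketch (table, column, and bucket choices are hypothetical) of an ORC table, the settings that let Hive use ORC's built-in indexes for predicate pushdown, and a transactional table, which on HDP-era Hive must be stored as ORC:

```sql
-- Hypothetical ORC table with Zlib compression.
CREATE TABLE sales_orc (
  order_id    BIGINT,
  customer_id BIGINT,
  amount      DOUBLE,
  order_date  STRING
)
STORED AS ORC
TBLPROPERTIES ("orc.compress" = "ZLIB");

-- Let Hive push predicates down into ORC's block-level indexes
-- so non-matching stripes/row groups are skipped (defaults vary by HDP version).
SET hive.optimize.ppd=true;
SET hive.optimize.index.filter=true;

-- ACID requires ORC plus bucketing and transactional=true on HDP-era Hive
-- (the cluster also needs hive.support.concurrency=true and the DbTxnManager).
CREATE TABLE sales_txn (
  order_id BIGINT,
  amount   DOUBLE
)
CLUSTERED BY (order_id) INTO 8 BUCKETS
STORED AS ORC
TBLPROPERTIES ("transactional" = "true");
```

With the index filter enabled, a query such as SELECT ... WHERE amount > 1000 can skip any stripe whose min/max statistics rule out a match.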


3 REPLIES

Master Mentor

@awatson@hortonworks.com

This blog is very useful; I share it with customers and prospects: link

This focus on efficiency leads to some impressive compression ratios. This picture shows the sizes of the TPC-DS dataset at Scale 500 in various encodings. This dataset contains randomly generated data including strings, floating point and integer data.

(Image 326-orcfile.png: sizes of the TPC-DS Scale 500 dataset in various encodings)

Very well written - link

One thing to note: Parquet's default compression is Snappy.

This is not an official statement, but based on aggressive testing in one of our environments, ORC + Zlib showed better performance than Parquet + Snappy.
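For reference, a rough sketch of how such a size and performance comparison might be set up in Hive (table names and the warehouse path are hypothetical; orc.compress and parquet.compression are the standard per-format compression properties):

```sql
-- Hypothetical source table events_raw copied into both formats.

-- ORC compressed with Zlib:
CREATE TABLE events_orc
STORED AS ORC
TBLPROPERTIES ("orc.compress" = "ZLIB")
AS SELECT * FROM events_raw;

-- Parquet compressed with Snappy (its default codec):
SET parquet.compression=SNAPPY;
CREATE TABLE events_parquet
STORED AS PARQUET
AS SELECT * FROM events_raw;

-- Compare the on-disk footprint of the two copies
-- (assuming the default HDP warehouse location):
-- hdfs dfs -du -h /apps/hive/warehouse/events_orc
-- hdfs dfs -du -h /apps/hive/warehouse/events_parquet
```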


Master Mentor

@Andrew Watson has this been resolved? Can you accept the best answer or provide your own solution?