Support Questions
Find answers, ask questions, and share your expertise

ORC vs Parquet - When to use one over the other


Hi All,

While ORC and Parquet are both columnar data stores that are supported in HDP, I was wondering if there was additional guidance on when to use one over the other? Or things to consider before choosing which format to use?

Thanks,

Andrew

1 ACCEPTED SOLUTION


Re: ORC vs Parquet - When to use one over the other

Contributor

In my mind, the two biggest considerations for choosing ORC over Parquet are:

1. Many of the performance improvements delivered by the Stinger initiative depend on features of the ORC format, including a block-level index for each column. These indexes enable more efficient I/O, allowing Hive to skip reading entire blocks of data when it determines from the index that the predicate values are not present there. The Cost-Based Optimizer can also use the column-level metadata stored in ORC files to generate the most efficient query plan.

2. ACID transactions in Hive are only possible when ORC is used as the file format.
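To illustrate the second point, here is a minimal sketch of the DDL involved (table and column names are hypothetical). A transactional table must be stored as ORC, and in current HDP releases it must also be bucketed:

```sql
-- ACID requires ORC: a transactional table sketch (names are hypothetical)
CREATE TABLE orders_acid (
  order_id BIGINT,
  status   STRING
)
CLUSTERED BY (order_id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- By contrast, a Parquet table can be created, but cannot be transactional
CREATE TABLE orders_parquet (
  order_id BIGINT,
  status   STRING
)
STORED AS PARQUET;
```

Attempting to set 'transactional'='true' on a non-ORC table will fail at DDL time.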

3 REPLIES 3

Re: ORC vs Parquet - When to use one over the other

@awatson@hortonworks.com

This blog post is very useful. I share it with customers and prospects: link

This focus on efficiency leads to some impressive compression ratios. The picture below shows the sizes of the TPC-DS dataset at Scale 500 in various encodings. This dataset contains randomly generated data, including strings, floating-point, and integer data.

[Attachment: 326-orcfile.png - TPC-DS Scale 500 dataset sizes by encoding]

Very well written - link

One thing to note: Parquet's default compression is Snappy.

This is not an official statement, but based on aggressive testing in one of our environments, ORC+Zlib had better performance than Parquet+Snappy.
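For anyone who wants to repeat that comparison, the codec can be pinned explicitly per table; a minimal sketch (table and column names are hypothetical, and Zlib is already ORC's default codec):

```sql
-- ORC compressed with Zlib (Zlib is the ORC default)
CREATE TABLE events_orc (
  event_id BIGINT,
  payload  STRING
)
STORED AS ORC
TBLPROPERTIES ('orc.compress'='ZLIB');

-- Parquet compressed with Snappy (the Parquet default in Hive)
CREATE TABLE events_parquet (
  event_id BIGINT,
  payload  STRING
)
STORED AS PARQUET
TBLPROPERTIES ('parquet.compression'='SNAPPY');
```

After loading the same data into both tables, comparing the output of `ANALYZE TABLE ... COMPUTE STATISTICS` or the HDFS file sizes gives the size comparison discussed above.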


Re: ORC vs Parquet - When to use one over the other

Mentor

@Andrew Watson has this been resolved? Can you accept best answer or provide your own solution?
