ORC vs Parquet - When to use one over the other
Labels: Apache Hadoop, Apache Hive
Created 10-24-2015 02:06 PM
Hi All,
While ORC and Parquet are both columnar data stores that are supported in HDP, I was wondering if there was additional guidance on when to use one over the other? Or things to consider before choosing which format to use?
Thanks,
Andrew
Created 10-26-2015 04:42 PM
In my mind, the two biggest considerations for ORC over Parquet are:
1. Many of the performance improvements delivered by the Stinger initiative depend on features of the ORC format, including block-level indexes for each column. This enables more efficient I/O: Hive can skip reading entire blocks of data when the index shows that predicate values are not present in them. The Cost-Based Optimizer can also use the column-level metadata stored in ORC files to generate the most efficient query plan.
2. ACID transactions in Hive are only possible when using ORC as the file format. (A minimal DDL sketch follows this list.)
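To make both points concrete, here is a minimal HiveQL sketch of an ACID-enabled ORC table. The table name and columns are hypothetical, and the exact prerequisites (bucketing, concurrency and transaction-manager settings) vary by Hive/HDP version, so treat this as an illustration rather than a complete recipe:

```sql
-- ACID in this era of Hive also required session/server settings, e.g.:
--   SET hive.support.concurrency=true;
--   SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
-- Hypothetical table; ACID v1 required bucketing plus ORC storage.
CREATE TABLE events (
  id      BIGINT,
  payload STRING
)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- Point 2: row-level mutation works only because the table is ORC + transactional.
UPDATE events SET payload = 'fixed' WHERE id = 42;

-- Point 1: a selective predicate like this lets Hive consult ORC's
-- min/max indexes and skip whole blocks that cannot match.
SELECT count(*) FROM events WHERE id BETWEEN 100 AND 200;
```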
Created on 10-24-2015 02:29 PM - edited 08-19-2019 05:56 AM
@awatson@hortonworks.com
This blog is very useful; I share it with customers and prospects: link
From the blog: "This focus on efficiency leads to some impressive compression ratios. This picture shows the sizes of the TPC-DS dataset at Scale 500 in various encodings. This dataset contains randomly generated data including strings, floating-point, and integer data."
Very well written - link
One thing to note: Parquet's default compression is Snappy. This is not an official statement, but based on aggressive testing in one of our environments, ORC + Zlib performed better than Parquet + Snappy. (A sketch of how to set these codecs follows below.)
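For reference, here is a minimal sketch of how those codec choices are expressed as Hive table properties. The table and column names are made up, and `parquet.compression` support as a table property depends on your Hive version (on older releases the Parquet codec was typically set through job configuration instead):

```sql
-- ORC with ZLIB compression (ZLIB is also ORC's default codec):
CREATE TABLE sales_orc (
  sale_id BIGINT,
  amount  DOUBLE
)
STORED AS ORC
TBLPROPERTIES ('orc.compress'='ZLIB');

-- Parquet with Snappy; note the table property is version-dependent,
-- and some Hive versions read the codec from job config instead:
CREATE TABLE sales_parquet (
  sale_id BIGINT,
  amount  DOUBLE
)
STORED AS PARQUET
TBLPROPERTIES ('parquet.compression'='SNAPPY');
```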
Created 02-02-2016 04:35 PM
@Andrew Watson, has this been resolved? Can you accept the best answer or provide your own solution?