Hive Table formats

Explorer

Hello,

I am using hive-testbench (http://blog.moserit.com/benchmarking-hive) to test some queries. By default, using ./tpcds-setup.sh 10, what file format will my Hive tables have (in HDFS they are listed with a .deflate extension)? I think the best file formats for performance are either ORC or Parquet; how can I generate the tables in those formats?
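
Would a CTAS along these lines be the right way to convert the generated tables (assuming the text-format tables are already built; store_sales is just one example table from the TPC-DS schema)?

    -- Sketch only: convert one generated table into ORC and Parquet copies.
    -- The other TPC-DS tables would follow the same pattern.
    CREATE TABLE store_sales_orc
    STORED AS ORC
    AS SELECT * FROM store_sales;

    CREATE TABLE store_sales_parquet
    STORED AS PARQUET
    AS SELECT * FROM store_sales;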

Thanks



Note: Parquet is supported for LLAP but will not be cached.


@Mário Rodrigues

Deflate is not a file format; it is a compression codec. If a file is stored in a compressed state, its extension in HDFS will show as .deflate. As you stated, ORC performs better when loading the table. Parquet and Avro also serve their own purposes. When I tested a table with 3 billion records, the load times per format were, in ascending order:

ORC

Avro

Parquet

ORC took the least time to load. But if your file format is dynamic, then it is better to go with Parquet/Avro.
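
For illustration (placeholder names, not the exact statements from my test), loading the same source table into each format can be sketched with simple CTAS statements:

    -- src_table is a stand-in for the table being loaded in each format.
    CREATE TABLE test_orc     STORED AS ORC     AS SELECT * FROM src_table;
    CREATE TABLE test_avro    STORED AS AVRO    AS SELECT * FROM src_table;
    CREATE TABLE test_parquet STORED AS PARQUET AS SELECT * FROM src_table;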

Explorer

So although it presents itself as .deflate, it is basically ORC? Spark can query .parquet files; will it also be able to query these files in the deflate format?


@Mário Rodrigues

Yes, even though it is shown as .deflate, it is ORC in a compressed state. I think you will be able to read the files through Hive tables in Spark SQL, but you can't use the underlying files directly since they are compressed. If you want to read the files directly, load the Hive tables without any compression, and then Spark can make use of the underlying files.
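
A minimal sketch of that approach, assuming an uncompressed text copy is acceptable (my_table is a placeholder name):

    -- Make sure output compression is off for this session.
    SET hive.exec.compress.output=false;

    -- Write an uncompressed copy whose underlying files Spark can read directly.
    CREATE TABLE my_table_text
    STORED AS TEXTFILE
    AS SELECT * FROM my_table;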

@Mário Rodrigues

Refer to the blog for details.

Explorer

Thank you all for the answers!