Created 05-08-2017 12:40 AM
Hello,
I am using Hive-testbench (http://blog.moserit.com/benchmarking-hive) to test some queries. By default, when running ./tpcds-setup.sh 10, what file format will my Hive tables have (in HDFS the files are listed with a .deflate extension)? I think the best file formats for performance are either ORC or Parquet; how can I generate the data in those formats?
Thanks
Created 05-08-2017 01:36 AM
@mÁRIO Rodrigues use https://github.com/hortonworks/hive-testbench. Default format is ORC.
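If you want to generate the data in a different format, here is a minimal sketch. It assumes the tpcds-setup.sh script in hive-testbench honors a FORMAT environment variable (as some versions of the repo do); check the script or its README for the exact variable name in your copy.

    # Default run: produces ORC-backed tables at scale factor 10
    ./tpcds-setup.sh 10
    # Hypothetical: override the storage format if the script supports a FORMAT variable
    FORMAT=parquet ./tpcds-setup.sh 10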
Created 05-08-2017 01:37 AM
Note: Parquet is supported for LLAP but will not be cached.
Created 05-08-2017 06:28 AM
Deflate is not a table format; if a file is stored in a compressed state, it shows up in HDFS with a .deflate extension. As you stated, ORC performs better when loading the table. Parquet and Avro also serve their own purposes. When I tested a table with 3 billion records, the load times into a Hive table, in ascending order of time taken, were:
ORC
Avro
Parquet
ORC took the least amount of time during loading. But if your schema is dynamic, then it is better to go with Parquet/Avro.
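To reproduce a comparison like the one above, a hedged sketch is below: it loads the same data into tables stored as ORC, Avro, and Parquet and times each load. The sales_* table names, the source_table, and its contents are hypothetical placeholders, not part of the testbench.

    # Time the load into each storage format (source_table is a placeholder staging table)
    time hive -e "CREATE TABLE sales_orc     STORED AS ORC     AS SELECT * FROM source_table;"
    time hive -e "CREATE TABLE sales_avro    STORED AS AVRO    AS SELECT * FROM source_table;"
    time hive -e "CREATE TABLE sales_parquet STORED AS PARQUET AS SELECT * FROM source_table;"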
Created 05-09-2017 12:37 AM
So although it presents itself as .deflate, basically it's ORC? Spark can query .parquet files; will it also be able to query these files in the deflate format?
Created 05-09-2017 03:38 PM
Yes, even though it is expressed as .deflate, it is ORC in a compressed state. I think you will be able to read the files through Hive tables in Spark SQL, but you can't use the underlying files directly because they are compressed. If you want to read the files themselves, load the Hive tables without any compression and then Spark can make use of the files underneath.
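As a hedged example of the first approach (reading through the Hive table rather than the raw files), something like the following should work from the Spark SQL command line; the database and table names are placeholders for whatever a hive-testbench run created in your metastore.

    # Query the ORC-backed Hive table through Spark's Hive metastore integration
    spark-sql -e "SELECT count(*) FROM tpcds_bin_partitioned_orc_10.store_sales;"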
Created 05-08-2017 09:30 AM
Refer to the blog for details.
Created 05-09-2017 12:34 AM
Thank you all for the answers!