I am using Hive-testbench (http://blog.moserit.com/benchmarking-hive) to test some queries. By default, using ./tpcds-setup.sh 10, what file format will my Hive tables have (in HDFS the files are listed with a .deflate extension)? I think the best file formats for performance are either ORC or Parquet; how can I generate the tables in those formats?
Deflate is not a table format; it is a compression codec, so when a file is stored in a compressed state its extension in HDFS shows up as .deflate. As you stated, ORC performs better during table loading, while Parquet and Avro each serve their own purposes. When I tested a table with 3 billion records, the load time was lowest for ORC and highest for Parquet. But if your schema is dynamic (evolving), it's better to go with Parquet/Avro.
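To regenerate the benchmark data in a specific format, hive-testbench's setup script can usually be steered with an environment variable; the exact variable name (FORMAT below) and the table/database names are assumptions here, so check the script in your own checkout. A minimal sketch:

```shell
# Assumption: tpcds-setup.sh honors a FORMAT variable; verify this in your copy.
FORMAT=parquet ./tpcds-setup.sh 10

# Alternative: rewrite an existing table through Hive in the desired format.
# The table and database names below are illustrative, not from the benchmark.
hive -e "CREATE TABLE store_sales_parquet STORED AS PARQUET
         AS SELECT * FROM tpcds.store_sales;"
```

The CREATE TABLE ... STORED AS ... AS SELECT pattern works for ORC as well (STORED AS ORC) and is the standard way to convert an existing Hive table to a new storage format.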
So although it presents itself as .deflate, it's basically ORC? Spark can query .parquet files; will it be able to query these files in deflate format?
Yes, even though it is exposed as .deflate, it is ORC in a compressed state. I think you will be able to read the files through the Hive tables in Spark SQL, but you can't use the underlying files directly as they are compressed. If you want to read the raw files, load the Hive tables without compression and then Spark can make use of the files underneath.
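One way to check what the files really are, independent of the extension, is to look at the magic bytes: ORC files begin with the ASCII string "ORC" and Parquet files begin and end with "PAR1", while a raw deflate stream has neither. A sketch against a local copy of a file (for HDFS you would pipe hdfs dfs -cat <path> into head instead):

```shell
# Stand-in file whose first bytes mimic an ORC header (illustrative only;
# in practice you would copy a real data file out of the warehouse dir).
printf 'ORCdata...' > /tmp/sample_file

# Read the first three bytes and compare against the ORC magic string.
magic=$(head -c 3 /tmp/sample_file)
if [ "$magic" = "ORC" ]; then
  echo "looks like ORC"
else
  echo "not ORC"
fi
```

If the check fails on both magics, the file is likely a plain compressed text/sequence file rather than ORC or Parquet.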