Created 07-03-2018 08:05 AM
I am exporting Hive table data to CSV files in HDFS using queries like this:
FROM Table T1 INSERT OVERWRITE DIRECTORY '<HDFS Directory>' SELECT *;
Hive is writing many small CSV files (1-2 MB) to the destination directory.
Is there a way to control the number of files, or the size of the CSV files?
Note:
1) These CSV files are not used to create tables from, so I cannot replace the query with INSERT INTO TABLE...
2) I have already tried setting the following properties, to no avail:
hive.merge.mapfiles=true
hive.merge.mapredfiles
hive.merge.smallfiles.avgsize
hive.merge.size.per.task
mapred.max.split.size
mapred.min.split.size
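For reference, a session-level version of these settings looks like the following (the actual values tried aren't shown in the post, so the values here are only illustrative; 16 MB and 256 MB are the stock Hive defaults for the two merge thresholds):
set hive.merge.mapfiles=true;                  -- merge small files at the end of a map-only job
set hive.merge.mapredfiles=true;               -- merge small files at the end of a map-reduce job
set hive.merge.smallfiles.avgsize=16000000;    -- if avg output file size is below this, trigger a merge pass
set hive.merge.size.per.task=256000000;        -- target size of the merged files
set mapred.max.split.size=256000000;           -- illustrative split bounds for mapper inputs
set mapred.min.split.size=16000000;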
TIA
I have many tables in Hive of varying sizes; some are very large and some are small. For large tables I am fine with many files being generated, as long as each file is larger than 16 MB. I don't want to explicitly set the number of mappers, because that would hamper query performance for the large tables.
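Given that 16 MB floor, one option (a sketch based on how the merge thresholds work, not something confirmed in this thread) would be to point the merge trigger at that value and pick a target size above it:
set hive.merge.smallfiles.avgsize=16000000;    -- start a merge pass when the avg output file is below 16 MB
set hive.merge.size.per.task=64000000;         -- hypothetical target size for the merged files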
Created 07-03-2018 08:42 AM
If you are using Tez as the execution engine, then you need to set the properties below:
set hive.merge.tezfiles=true;
set hive.merge.smallfiles.avgsize=128000000;
set hive.merge.size.per.task=128000000;
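Put together with the original export, a full session would look roughly like this (a sketch; the 128 MB values are the ones suggested above and can be tuned down toward the 16 MB floor mentioned earlier):
set hive.merge.tezfiles=true;                  -- run an extra file-merge stage after the Tez job
set hive.merge.smallfiles.avgsize=128000000;   -- merge when the avg output file is below ~128 MB
set hive.merge.size.per.task=128000000;        -- target size of the merged files
FROM Table T1 INSERT OVERWRITE DIRECTORY '<HDFS Directory>' SELECT *;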
Created 07-03-2018 01:24 PM
That works, thank you.