Created 11-07-2023 12:23 PM
Hello everyone,
my team, using Hive on Tez, has noticed that an INSERT with a very simple SELECT generates a single Parquet file of about 1.5 GB per partition in the output table (an illustrative sketch of the statement follows the settings below).
To try to remedy the problem, we applied a number of settings at the session level, but they had no effect.
Below are the settings used at the session level:
SET hive.execution.engine=tez;
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.optimise.sort.dynamic.partition.threshold=0;
--SET tez.grouping.max-size=268435456;
--SET hive.exec.reducers.bytes.per.reducer=536870912;
--SET tez.grouping.split-count=18;
SET hive.vectorized.execution.reduce.enabled = true;
SET hive.vectorized.execution.reduce.groupby.enabled = true;
--SET hive.tez.auto.reducer.parallelism=false;
--SET mapred.reduce.tasks=12;
--SET hive.tez.partition.size=104857600;
--SET hive.tez.partition.num=10;
SET hive.parquet.output.block.size=104857600;
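For reference, the insert has roughly this shape (the database, table, column, and partition names below are placeholders for illustration, not our real objects):
-- Illustrative sketch only: a simple dynamic-partition insert of the kind described above
INSERT OVERWRITE TABLE target_db.target_tbl PARTITION (load_date)
SELECT
  col1,
  col2,
  load_date
FROM source_db.source_tbl;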
I would like to ask whether there is a way to keep Parquet output but have it broken up into smaller files, as shown in the image below.
We cannot understand what the cause might be.
Files structured this way do not provide sufficient parallelism for the other jobs that read them (such as Sqoop).
Created 11-08-2023 01:27 AM
Hi @HadoopHero ,
For Hive, if there is a single reduce task writing the output data, it will not break the output up into smaller files; that is expected and cannot be configured to behave differently.
With DISTRIBUTE BY you should be able to get multiple reducers (if you have a column by which you can reasonably "split" your data into smaller subsets); see
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SortBy
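As a sketch only (the table, column, and partition names are hypothetical, and the reducer count is just an example), the rewrite could look like this; distributing on a suitable column spreads the rows across multiple reducers, and each reducer writes its own file per partition:
-- Illustrative: force multiple reducers by distributing on a suitable column
SET mapred.reduce.tasks=12;
INSERT OVERWRITE TABLE target_db.target_tbl PARTITION (load_date)
SELECT col1, col2, load_date
FROM source_db.source_tbl
DISTRIBUTE BY col1;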
Best regards
Miklos
Created 11-09-2023 12:51 AM
Hello Miklos,
unfortunately, what you suggested had no effect. We continue to have the same problem: a single Parquet file is still created.
Created on 11-09-2023 01:41 AM - edited 11-09-2023 02:56 AM
Hi @HadoopHero ,
If the query involves dynamic partitioning, one potential issue is that 'hive.optimize.sort.dynamic.partition.threshold' may limit the number of open record writers to just one per partition value, resulting in the creation of only one file. To investigate this, could you attempt disabling 'hive.optimize.sort.dynamic.partition.threshold' entirely?
SET hive.optimize.sort.dynamic.partition.threshold=-1;
Note: the config name in the problem statement contains a typo (hive.optimise.sort.dynamic.partition.threshold should be hive.optimize.sort.dynamic.partition.threshold).
Created 11-13-2023 09:32 AM
@HadoopHero Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
Regards,
Diana Torres
Created 11-13-2023 02:13 PM
@DianaTorres I'm sorry, but unfortunately the problem still persists even after trying the suggestions in the previous posts.
Created 11-20-2023 08:24 PM
@HadoopHero The answer would vary based on the query that you are running. Assuming you have a simple "INSERT ... SELECT */cols FROM table", it is likely a map-only job, and you may want to try tuning the settings below.
set tez.grouping.min-size=134217728; -- 128 MB min split
set tez.grouping.max-size=1073741824; -- 1 GB max split
Try setting min-size and max-size to the same value. I would not go below 128 MB.
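For example, to target splits of roughly 256 MB (the value is illustrative; pick it based on your data volume), both settings can be pinned to the same size in the session before running the insert:
set tez.grouping.min-size=268435456; -- 256 MB
set tez.grouping.max-size=268435456; -- 256 MB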
Created 11-24-2023 09:31 AM
@HadoopHero Has the reply helped resolve your issue? Thanks.
Regards,
Diana Torres