My team, using Hive on Tez, has noticed that during an INSERT with a very simple SELECT, a single Parquet file of about 1.5 GB per partition is generated in the output table.
To try to remedy the problem, we applied a number of settings at the session level, but they had no effect.
Below are the settings used at the session level:
SET hive.vectorized.execution.reduce.enabled = true;
SET hive.vectorized.execution.reduce.groupby.enabled = true;
I would like to ask whether there is a way to keep Parquet output but have it broken up into smaller files, as shown in the image below.
We cannot understand what the cause might be.
Files structured in this way do not provide sufficient parallelism for other jobs (such as Sqoop).
Hi @HadoopHero ,
For Hive, if there is a single reduce task writing the output data, it will not break the output into smaller files; that is expected and cannot be configured to behave differently.
With DISTRIBUTE BY you should be able to get multiple reducers (if you have a column by which you can reasonably "split" your data into smaller subsets); see the sketch below.
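A minimal sketch of that idea; the table and column names (target_table, source_table, part_col, col_a, col_b) are hypothetical and only illustrate the pattern:

SET hive.exec.reducers.bytes.per.reducer=268435456;  -- aim for roughly 256 MB per reducer (illustrative value)
INSERT OVERWRITE TABLE target_table PARTITION (part_col)
SELECT col_a, col_b, part_col
FROM source_table
DISTRIBUTE BY col_a;  -- rows with the same col_a go to the same reducer, so each partition is written by several reducers instead of one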
Hi @HadoopHero ,
If the query involves dynamic partitioning, one potential issue is that hive.optimize.sort.dynamic.partition.threshold may limit the number of open record writers to just one per partition value, resulting in the creation of only one file. To investigate this, could you try disabling hive.optimize.sort.dynamic.partition.threshold entirely?
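A session-level sketch of that suggestion (using -1 to fully disable the sorted dynamic partition optimization is my assumption; 0 leaves the decision to the optimizer):

SET hive.optimize.sort.dynamic.partition.threshold=-1;  -- assumption: -1 turns the optimization off for this session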
Note: The problem statement contains a typo in the config name.
@HadoopHero Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
@HadoopHero The answer would vary based on the query you are running. Assuming you have a simple "INSERT ... SELECT */cols FROM table", it is likely a map-only job, and you may want to try tuning the settings below.
SET tez.grouping.min-size=134217728;  -- 128 MB min split
SET tez.grouping.max-size=1073741824; -- 1 GB max split
Try setting min-size and max-size to the same value. I would not go below 128 MB.
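For example, a sketch of pinning both grouping sizes to the same value (256 MB here is only an illustrative choice, not a figure from this thread):

SET tez.grouping.min-size=268435456;  -- 256 MB
SET tez.grouping.max-size=268435456;  -- same value, so Tez groups splits of a uniform size and a map-only insert writes more, smaller output files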
@HadoopHero Has the reply helped resolve your issue? Thanks.