
Hive increase map join local task memory

Contributor

Is there a way in HDP >= v2.2.4 to increase the local task memory? I'm aware that map-only join sizes can be disabled or limited, but we want to increase the memory, not limit it.

Depending on the environment, the memory allocation shifts, but it appears to be left entirely to YARN's and Hive's discretion.

"Starting to launch local task to process map join;maximum memory = 255328256 => ~ 0.25 GB"

I've looked at/tried:

  • hive.mapred.local.mem
  • hive.mapjoin.localtask.max.memory.usage - this is simply a percentage of the local heap. I want to increase the memory, not limit it.
  • mapreduce.map.memory.mb - only effective for non-local tasks

I found documentation suggesting 'export HADOOP_HEAPSIZE="2048"' to change the default, but in my case this applied to the NodeManager.
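For context, which JVM that export resizes depends on where it is placed; a minimal sketch, assuming the export ended up in a cluster-wide Hadoop env file rather than in the Hive client's environment:

# hadoop-env.sh (cluster-wide) -- as observed above, an export here ends up
# sizing daemon JVMs such as the NodeManager, not the Hive client JVM
# that launches the map-join local task
export HADOOP_HEAPSIZE="2048"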

Any way to configure this on a per-job basis?

EDIT

To avoid duplication, the info I'm referencing comes from here: https://support.pivotal.io/hc/en-us/articles/207750748-Unable-to-increase-hive-child-process-max-hea...

It sounds like a per-job solution is not currently available because of this bug.

1 ACCEPTED SOLUTION

Contributor

It's a bug in Hive. You can disable hive.auto.convert.join or set the memory globally via HADOOP_HEAPSIZE, but neither solves the question of setting the local task memory on a per-job basis.
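In practice the two workarounds look roughly like this (the 2048 MB value is illustrative, not from the original post):

-- per script/session: skip the map-join conversion so no local task is launched
set hive.auto.convert.join=false;

# global: raise the heap of the Hive client JVM that runs the local task (value in MB)
export HADOOP_HEAPSIZE="2048"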


17 REPLIES


What client are you using to run the query? If it's the Hive CLI, you can run export HADOOP_OPTS="-Xmx2048m" in the shell and then invoke the Hive CLI.
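For example (a sketch; the heap size and the script name my_query.hql are illustrative):

$ export HADOOP_OPTS="-Xmx2048m"   # extra JVM options picked up by the hadoop/hive launcher scripts
$ hive -f my_query.hql             # the CLI session then starts with the larger heap

(As reported further down, this did not end up resizing the local task in the asker's environment.)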


@Michael Miklavcic you have to increase the Tez container size (hive.tez.container.size) and hive.tez.java.opts (which should be about 80% of the container size) to have more memory available.

Then you can increase hive.auto.convert.join.noconditionaltask.size to automatically convert joins to map joins, or set hive.ignore.mapjoin.hint=false and use the MAPJOIN hint (select /*+ MAPJOIN(dimension_table_name) */ ...), as sketched below.
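Put together, that suggestion looks roughly like this (sizes and the fact_table/dimension_table_name names are illustrative):

set hive.tez.container.size=4096;          -- Tez container size in MB
set hive.tez.java.opts=-Xmx3276m;          -- roughly 80% of the container size
set hive.auto.convert.join.noconditionaltask.size=1000000000;  -- bytes; raise so larger small tables still auto-convert to map joins

-- or keep the threshold and force the map join with a hint:
set hive.ignore.mapjoin.hint=false;
select /*+ MAPJOIN(dimension_table_name) */ f.* from fact_table f join dimension_table_name d on f.key = d.key;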

Contributor

For those upvoting this answer: it is the correct answer for increasing memory for mapper YARN containers, but it will not work in cases where Hive optimizes by creating a local task. What happens is that Hive first builds a hash table of values for the map-side join in a local task on one node, then uploads it to HDFS for distribution to all mappers that need the fast lookup table. It's that local task that is the problem here, and the only way to fix it is to bail on the map-side join optimization, or to change HADOOP_HEAPSIZE at a global level through Ambari. Not elegant, but it is a workaround.

New Contributor

Thanks @Guilherme Braccialli. Increasing hive.auto.convert.join.noconditionaltask.size fixed our problem. Upvoted!

Contributor

Hi @Alind Billore, how much memory did you set for this property? I'm facing the same issue with the setting below.

set hive.auto.convert.join.noconditionaltask.size=3300000000;

Contributor

For now, I've worked around the issue by setting the property below, so the join is no longer converted to a map join and no in-memory hash table is built. But I need to revisit my table data and predicates (WHERE clauses) again to check whether any unnecessary data is being fetched.

set hive.auto.convert.join=false;

Contributor

Doesn't seem to work. Did the following:

$ export HADOOP_OPTS="-Xmx1024m"

$ hive -f test.hql > results.txt

...

Starting to launch local task to process map join;maximum memory = 511180800 = 0.5111808GB

...


@Michael Miklavcic check hive.mapjoin.localtask.max.memory.usage; it's the percentage of memory dedicated to the local map-join task.
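For reference, it is set as a fraction of the local JVM heap rather than an absolute size (a sketch; 0.90 is the shipped default):

set hive.mapjoin.localtask.max.memory.usage=0.90;  -- local task is killed once its hash table exceeds this fraction of the heap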

Contributor

@Guilherme Braccialli, that doesn't increase the memory allocation for the local task. It's a percentage threshold before the job is automatically killed, and it's already at 90% by default, so at this point the only option is to increase the local memory allocation. I tested the HADOOP_HEAPSIZE option from Ambari, and it works, but it's global.
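For completeness, the global setting that worked was the HADOOP_HEAPSIZE export in hive-env, edited through Ambari; a sketch, with the exact template wording varying by Ambari/HDP version:

# hive-env template (Ambari > Hive > Configs) -- applies to every Hive client session, hence global rather than per-job
export HADOOP_HEAPSIZE="2048"   # MB; sizes the Hive client JVM, and with it the map-join local task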