Posted 04-11-2014 10:31 AM
While researching how to tune YARN memory to fit the cluster size, I found the following settings to work for my configuration:

yarn.nodemanager.resource.memory-mb = 20 GB
yarn.scheduler.minimum-allocation-mb = 4 GB
yarn.scheduler.maximum-allocation-mb = 20 GB
mapreduce.map.memory.mb = 4 GB
mapreduce.reduce.memory.mb = 8 GB
mapreduce.map.java.opts = 3.2 GB
mapreduce.reduce.java.opts = 6.4 GB
yarn.app.mapreduce.am.resource.mb = 8 GB
yarn.app.mapreduce.am.command-opts = 6.4 GB

That allowed my particular Hive query to execute on our 10-node cluster with 30 GB of physical RAM per node.
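The arithmetic behind these numbers can be sketched as follows. This is just an illustration, not from the original post: the 0.8 heap-to-container ratio is the commonly recommended rule of thumb for sizing `java.opts` inside a YARN container, and it happens to reproduce the 3.2 GB / 6.4 GB values above.

```python
# Sketch of the arithmetic behind the settings above (container sizes taken
# from the post; the 0.8 heap-to-container ratio is an assumed rule of thumb).
GB = 1024  # YARN memory settings are expressed in MB

nodemanager_mb = 20 * GB   # yarn.nodemanager.resource.memory-mb (per node)
min_alloc_mb = 4 * GB      # yarn.scheduler.minimum-allocation-mb
map_mb = 4 * GB            # mapreduce.map.memory.mb
reduce_mb = 8 * GB         # mapreduce.reduce.memory.mb
nodes = 10                 # cluster size from the post

# Containers each NodeManager can run at the minimum allocation
containers_per_node = nodemanager_mb // min_alloc_mb   # 5
cluster_containers = nodes * containers_per_node       # 50 cluster-wide

# Keep the JVM heap at ~80% of the container, leaving headroom for
# off-heap memory so YARN does not kill the container for exceeding it
map_opts_mb = int(map_mb * 0.8)        # ~3276 MB, i.e. ~3.2 GB
reduce_opts_mb = int(reduce_mb * 0.8)  # ~6553 MB, i.e. ~6.4 GB

print(containers_per_node, cluster_containers, map_opts_mb, reduce_opts_mb)
```

Note also that 20 GB for the NodeManager out of 30 GB physical RAM leaves roughly 10 GB per node for the OS, DataNode, and other daemons, which is consistent with common sizing guidance.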