Member since: 10-07-2015 | Posts: 4 | Kudos Received: 2 | Solutions: 2
12-10-2015 04:14 AM
1 Kudo
Hajime, the scripts above cover the YARN container and MapReduce memory settings. If you are configuring the memory of the NodeManager process itself, it shouldn't need more than 2 GB - 4 GB. If you are seeing an OutOfMemoryError there, I suggest turning on verbose GC logging for the NodeManager process and reviewing the GC logs.
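A minimal sketch of how verbose GC logging could be turned on for the NodeManager daemon, assuming a yarn-env.sh based install (the file location and log path below are illustrative; these are the standard JDK 7/8 GC flags, and YARN_NODEMANAGER_OPTS is the env var Hadoop reads for NodeManager JVM options):

```shell
# In yarn-env.sh (path varies by distribution, e.g. /etc/hadoop/conf/yarn-env.sh).
# Append verbose GC flags to the NodeManager JVM options; the log path is an example.
export YARN_NODEMANAGER_OPTS="$YARN_NODEMANAGER_OPTS -verbose:gc \
  -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
  -Xloggc:/var/log/hadoop-yarn/nodemanager-gc.log"
```

Restart the NodeManager after changing this, then review the GC log for full-GC frequency and heap usage before/after collections.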
12-10-2015 12:01 AM
Hi Hajime, typically we set the NodeManager heap to 2 GB - 4 GB; I haven't had to set it higher than that. What is it currently set to? -Koelli
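For reference, a sketch of where that heap size is typically set, assuming a yarn-env.sh managed install (YARN_NODEMANAGER_HEAPSIZE is the standard Hadoop variable and is given in MB; 2048 here matches the low end of the 2 GB - 4 GB guidance above):

```shell
# In yarn-env.sh (path varies by distribution, e.g. /etc/hadoop/conf/yarn-env.sh).
# NodeManager daemon heap, in MB.
export YARN_NODEMANAGER_HEAPSIZE=2048
```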
12-09-2015 09:37 PM
1 Kudo
-XX:NewSize and -XX:MaxNewSize should be 1/8 of the maximum heap size (-Xmx). So if -Xmx is set to 8 GB, then -XX:NewSize and -XX:MaxNewSize should each be set to 1 GB. The value of dfs.namenode.handler.count is calculated from the number of DataNodes in the cluster. @Arpit Agarwal suggested that the value for this should be ln(number of DataNodes) * 20. For example, in a 450-node cluster it can be set to around 180. Thanks, Koelli
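The two rules of thumb above can be sketched as follows (the helper names are illustrative, not part of any Hadoop API):

```python
import math

def new_gen_size_mb(xmx_mb):
    """-XX:NewSize / -XX:MaxNewSize at 1/8 of the -Xmx heap, in MB."""
    return xmx_mb // 8

def namenode_handler_count(num_datanodes):
    """dfs.namenode.handler.count = ln(number of DataNodes) * 20."""
    return int(math.log(num_datanodes) * 20)

# An 8 GB heap gives a 1 GB new generation.
print(new_gen_size_mb(8192))  # 1024
```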
12-04-2015 06:03 PM
We can also enable verbose GC logging in the worker child opts and review the GC logs to determine whether it is an OutOfMemoryError: -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps