Created on 01-04-2017 06:42 PM - edited 09-16-2022 03:53 AM
Before you read further, keep in mind that I am a SysAdmin and not too familiar with development-related configuration.
After changing the replication factor on our Hadoop cluster to 2, I am enforcing the new factor on the existing files by running the following command as the hdfs user from my primary NameNode (HA configuration):
hdfs dfs -setrep -R -w 2 /
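(For reference, replication can be spot-checked before and after the run; the file path below is just a placeholder:)
# show the replication factor recorded for a single file
hdfs dfs -stat "replication=%r" /tmp/example.txt
# cluster-wide summary, including average block replication
hdfs fsck / | grep -i replication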
But after a while the command fails with this error: java.lang.OutOfMemoryError: Java heap space
For Java heap configuration on the Ambari YARN page there are quite a few settings:
ResourceManager Java heap size, NodeManager Java heap size, AppTimelineServer Java heap size, and YARN Java heap size, which are all set to 1024 MB.
To be honest, I have done some of the official Hortonworks administration courses, but I am not sure what a proper heap size is for each of the settings above so my developers can run their jobs, or what the correlation between them is.
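(In case it helps, my understanding is that those Ambari settings end up as the following variables in yarn-env.sh; the 1024 MB values simply mirror what I currently see and are not a recommendation:)
# yarn-env.sh equivalents of the Ambari heap settings (values in MB)
export YARN_RESOURCEMANAGER_HEAPSIZE=1024   # ResourceManager Java heap size
export YARN_NODEMANAGER_HEAPSIZE=1024       # NodeManager Java heap size
export YARN_TIMELINESERVER_HEAPSIZE=1024    # AppTimelineServer Java heap size
export YARN_HEAPSIZE=1024                   # YARN Java heap size (default for the YARN daemons)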
Also, could the error above be related to the host itself?
Please help, thanks!
At the time of writing I am generating a log to capture the full error, and I will post a clean output in this thread.
Created 01-05-2017 03:53 AM
Please check whether the heap configuration matches the recommendations here.
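(A quick way to see what the client-side heap is currently set to, assuming the standard HDP config location under /etc/hadoop/conf:)
# show the client heap settings picked up by commands such as hdfs dfs
grep -E 'HADOOP_HEAPSIZE|HADOOP_CLIENT_OPTS' /etc/hadoop/conf/hadoop-env.sh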
Created 06-02-2017 11:29 AM
This is a problem with your client Java configuration, not with the cluster daemons (ResourceManager, NodeManager, NameNode and others). So you need to increase the Java heap for the Hadoop client:
export HADOOP_OPTS="$HADOOP_OPTS -Xmx4G"
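Then re-run the setrep command from the same shell; HADOOP_OPTS is picked up by the client JVM that the hdfs script launches, so the 4G heap (an example value, not a recommendation) applies to the command itself rather than to any cluster daemon:
# re-run with the larger client heap now in effect
hdfs dfs -setrep -R -w 2 /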