Before you read further, keep in mind that I am a SysAdmin and I'm not too familiar with development-related configuration.
After changing the replication factor on our Hadoop cluster to 2, I'm enforcing the new factor on the existing files by running the following command as the hdfs user from my primary NameNode (HA configuration):
hdfs setrep -R -w 2 /
But after a while I get this error: java.lang.OutOfMemoryError: Java heap space
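For context, my understanding is that hdfs setrep runs in a client-side JVM, so I suspect it's the client heap rather than any of the YARN daemon heaps that is being exhausted. This is what I'm planning to try next, assuming the hdfs CLI honors HADOOP_CLIENT_OPTS (the 4 GB value is just a guess on my part, not a recommendation):

```shell
# Raise the heap for the client-side JVM only; HADOOP_CLIENT_OPTS is read
# by the hadoop/hdfs wrapper scripts when launching client commands.
# (4g is an assumed value -- adjust to what the host can spare.)
export HADOOP_CLIENT_OPTS="-Xmx4g"

# Re-run the replication enforcement as the hdfs user
hdfs setrep -R -w 2 /
```

Please correct me if the OOM could instead be coming from the NameNode itself rather than the client process.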
For Java heap configuration on Ambari-YARN there are quite a few settings:
ResourceManager Java heap size, NodeManager Java heap size, AppTimelineServer Java heap size, and YARN Java heap size, which are all set to 1024 MB.
To be honest, I've done some of the official Hortonworks administration courses, but I'm not sure what is considered a proper heap size for each of the configs above so my developers can run jobs, or what the correlation between them is.
Also, could the error above be related to the host itself?
Please help, thanks!
At the time of writing, I am capturing the entire error in a log and will post a clean copy of the output here.