
java.lang.OutOfMemoryError: Java heap space - From NN

Rising Star

Before you read further, keep in mind that I am a SysAdmin and I'm not too familiar with development-related configuration.

After changing the replication factor on our Hadoop cluster to 2, I'm enforcing the new factor on the existing files with the following command, run as the hdfs user from my primary NameNode (HA configuration):

hdfs setrep -R -w 2 /
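To spot-check the result afterwards I was planning to look at the replication column of a directory listing, something like this (the path is just an example):

hdfs dfs -ls /user/someuser
# the second column of the output is the replication factor, e.g. "-rw-r--r--   2 hdfs hdfs ..."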

But then after a while I get this error: java.lang.OutOfMemoryError: Java heap space

For Java heap configuration in Ambari under YARN there are quite a few settings:

ResourceManager Java heap size, NodeManager Java heap size, AppTimelineServer Java heap size and YARN Java heap size, which are all set to 1024 MB.

To be honest, I've done some of the official Hortonworks administration courses, but I'm not quite sure what is considered a proper heap size for each of the settings above so my developers can run their jobs, or what the correlation between them is.

Also, could the error above be related to an issue on the host itself?

Please help, thanks!

At the time of writing, I am generating a log to capture the entire error and will post a clean output to this thread.

2 REPLIES

Rising Star

Please check whether the heap configuration matches the recommendations here:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_installing_manually_book/content/ref-809...
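As a quick sanity check (generic commands, not taken from that doc), you can also confirm what -Xmx each YARN daemon was actually started with, on the hosts where they run:

ps -ef | grep -i '[r]esourcemanager' | grep -io 'xmx[0-9]*[mg]'
ps -ef | grep -i '[n]odemanager' | grep -io 'xmx[0-9]*[mg]'
# the bracket trick keeps grep from matching its own process entry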

New Contributor

It's a problem with your client Java configuration, not with the cluster daemons (ResourceManager, NodeManager, NN and the others). The setrep command runs in a client-side JVM, so you need to increase the Java heap for the Hadoop client:

export HADOOP_OPTS="$HADOOP_OPTS -Xmx4G"
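Then re-run the same command from that shell as the hdfs user, e.g.:

hdfs setrep -R -w 2 /

4G is just a starting point, adjust it to whatever the NameNode host can spare; on many Hadoop versions HADOOP_CLIENT_OPTS is also honoured by client commands such as hdfs.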