Ambari kills the NameNode process when it detects a Java error of type java.lang.OutOfMemoryError:

In Ambari, under HDFS -> hadoop-env template, replace

-XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node\"

with

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=\"/tmp/heap\"
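For reference, a minimal sketch of what the affected line in the hadoop-env template could look like after the change; the heap-dump directory and the surrounding options are illustrative, keep whatever options your template already carries and only swap the OnOutOfMemoryError part:

# In the hadoop-env template (sketch; keep your existing options)
export HADOOP_NAMENODE_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heap ${HADOOP_NAMENODE_OPTS}"

# On the NameNode host, make sure the dump directory exists and is writable by the hdfs user
mkdir -p /tmp/heap && chown hdfs:hadoop /tmp/heap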

This only allows you to start the NameNode so you can investigate what is causing the issue.

But this is only a temporary workaround: you still need to analyze the heap dump and find out what is wrong.
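For a first pass over the dump, the stock JDK 8 tools are usually enough; the pid and dump file name below are placeholders, and Eclipse MAT also opens these .hprof files:

# Live object histogram of the running NameNode (12345 is a placeholder pid)
jmap -histo:live 12345 | head -n 30

# Browse a dump written by -XX:HeapDumpPath (file name is an example; jhat ships with JDK 8)
jhat -J-Xmx8g /tmp/heap/java_pid12345.hprof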

--

SOLR can be one of the causes of this, when it creates huge audit logs that need to be written to HDFS.

You can clear the NN and SNN logs here: /var/log/hadoop/hdfs/audit/solr/spool

Be careful: delete only on the Standby NN, then do a failover and delete from the other server. Do not delete logs while the NameNode is active.
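A minimal sketch of that sequence, assuming an HA pair whose service IDs are nn1 and nn2 (take the real IDs from dfs.ha.namenodes.<nameservice> in hdfs-site.xml) and a user with HDFS admin rights:

# Find out which NameNode is currently standby (nn1/nn2 are example IDs)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# On the host that reported "standby" only: check the spool size, then clear it
du -sh /var/log/hadoop/hdfs/audit/solr/spool
rm -f /var/log/hadoop/hdfs/audit/solr/spool/*

# Fail over so the cleaned node becomes active (here: from active nn1 to nn2), then repeat on the other host
hdfs haadmin -failover nn1 nn2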

Comments

I am facing a similar issue where the standby NN is not starting. In the HDFS .out file we are getting:

java.lang.OutOfMemoryError: Requested array size exceeds VM limit.

Can we uncheck Audit to SOLR under Advanced ranger audit and then start the Standby NN? Will there be any impact on the cluster if we uncheck Audit to SOLR?