Ambari kills the NameNode process when it detects a Java error of type java.lang.OutOfMemoryError, because the JVM is started with an -XX:OnOutOfMemoryError hook that runs a kill script. To work around this:

In Ambari, go to HDFS -> Configs -> Advanced, and in the hadoop-env template replace

-XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node\"

with

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=\"/tmp/heap\"
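
For orientation, this flag lives inside the HADOOP_NAMENODE_OPTS (and HADOOP_SECONDARYNAMENODE_OPTS) export lines of the template. A minimal sketch of what the edited export could look like, with the surrounding flags omitted since they vary by HDP version:

export HADOOP_NAMENODE_OPTS="-server -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=\"/tmp/heap\" ${HADOOP_NAMENODE_OPTS}"

Make sure the path given to -XX:HeapDumpPath exists and is writable by the hdfs user, otherwise the JVM will not be able to write the dump.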

This only allows you to start the NameNode so you can investigate what is causing the issue.

But this is a temporary solution: you still need to analyze the heap dump and see what is actually consuming the memory.
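
If you are not sure where to start with the dump, a couple of standard JDK tools give a quick first impression (this assumes a JDK 8 install on the NameNode host; the PID and file name below are placeholders):

# class histogram of live objects in the running NameNode
jmap -histo:live <namenode_pid> | head -30

# browse a heap dump written to /tmp/heap after an OOM
jhat -port 7000 /tmp/heap/java_pid<pid>.hprof

For multi-gigabyte NameNode heaps, Eclipse MAT handles the dump more comfortably than jhat.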

--

SOLR can be one of the causes of this, when it creates huge audit logs that need to be written to HDFS.

You can clear these logs on the NN and SNN here: /var/log/hadoop/hdfs/audit/solr/spool

Be careful to delete only on the standby NN, then do a failover to delete from the other server. Do not delete logs while the NameNode is active.
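
A minimal sketch of that sequence, assuming an HA pair whose NameService IDs are nn1 (currently active) and nn2 (currently standby); substitute the IDs from your own hdfs-site.xml:

# confirm which NameNode is standby before touching anything
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# on the standby (nn2) host only, clear the Solr audit spool
rm -f /var/log/hadoop/hdfs/audit/solr/spool/*

# make nn2 active, then repeat the cleanup on the nn1 host
hdfs haadmin -failover nn1 nn2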

Comments

I am facing a similar issue where the standby NN is not starting. In the HDFS .out file we are getting:

java.lang.OutOfMemoryError: Requested array size exceeds VM limit

Can we uncheck Audit to SOLR under the Advanced ranger audit configuration and then start the standby NN? Will there be any impact on the cluster if we uncheck Audit to SOLR?
