Member since: 10-19-2017
Posts: 21
Kudos Received: 4
Solutions: 0
06-13-2019
12:45 AM
@Manoj Menon Did you find any such indication on your NameNode, like a long GC pause or memory reaching its max limit (sometimes 95%+)? Any indication of a long GC pause or high system load on the NameNode hosts? At the time the alert triggers, check:
# top
# free -m
# less /var/log/hadoop/hdfs/gc.log-*
# grep -i 'JvmPauseMonitor' /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | grep -i WARN
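The pause check above can be wrapped in a small helper — a sketch, assuming the default HDP log location (/var/log/hadoop/hdfs); adjust the path for your install:

```shell
#!/bin/sh
# Sketch: count JvmPauseMonitor WARN lines in the NameNode logs.
# The directory argument defaults to the HDP log location.
count_jvm_pauses() {
  dir="${1:-/var/log/hadoop/hdfs}"
  grep -hi 'JvmPauseMonitor' "$dir"/hadoop-hdfs-namenode-*.log 2>/dev/null \
    | grep -ci 'WARN'
}

# Prints the number of WARN-level pause detections (0 if none/no logs).
count_jvm_pauses "${LOG_DIR:-/var/log/hadoop/hdfs}"
```

A non-zero count means the JVM was stalled; inspect gc.log-* around those timestamps to confirm it was a GC pause.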
04-03-2018
03:02 AM
@Manoj Menon If this answers your query, then please mark this HCC thread as answered by clicking the "Accept" link on the correct answer. That way it will help other HCC users quickly find the answer.
03-01-2018
06:57 AM
Thanks for your comment. But after the activity (HA configuration and cluster restart), the jobs were not starting on the master machine, and the entries in /etc/hosts automatically went wrong. After some time I ran the jobs on the HA node, after moving the master services to it. Now the port is also not listening on the master's IP address.
02-27-2018
02:43 PM
1 Kudo
I solved this issue by enabling some tables in HBase, such as the system catalog, sequence, function, and stats tables.
01-22-2018
10:36 AM
Check the HBase GC logs. If there is a GC pause that correlates with these log lines, then you have found your problem. While the Master/RegionServer is in a GC pause, it fails to send a heartbeat to ZooKeeper, and the ZooKeeper session expires. In that case you can increase the ZooKeeper session timeout and increase tickTime in the ZooKeeper config. This is not directly related, but do check out: https://superuser.blog/hbase-dead-regionserver/
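To act on the timeout advice above, the relevant settings are the standard ones — a sketch, where 120000 ms is an illustrative value, not a recommendation; note that ZooKeeper caps any requested session timeout at 20 * tickTime on the server side:

```xml
<!-- hbase-site.xml: request a longer ZooKeeper session before the
     RegionServer is declared dead. Illustrative value only. -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>120000</value>
</property>

<!-- zoo.cfg (plain key=value format, shown here as a comment):
     tickTime=6000
     The server caps the effective session timeout at 20 * tickTime,
     so tickTime must also be raised for a 120000 ms request to take
     effect (20 * 6000 = 120000). -->
```

Raising these only masks the symptom; the GC pause itself should still be tuned away.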
01-06-2018
11:37 PM
2 Kudos
@Manoj Menon Every environment runs different kinds of jobs and has different hardware. The kinds of jobs, how many Hadoop components you are running, which HDP components are involved in the cluster, the behaviour of the jobs, the frequency of job execution, etc. are completely different, hence the values for RAM and storage differ. However, if you want generic guidelines for RAM/storage, you can refer to the following docs:
1. Determining HDP Memory Configuration Settings: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_command-line-installation/content/determine-hdp-memory-config.html
2. Configuring NameNode Heap Size: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_command-line-installation/content/configuring-namenode-heap-size.html
3. Hardware Planning on Slave & Master Nodes: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_cluster-planning/content/hardware-for-slave.1.html and https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_cluster-planning/content/hardware-selection-master-nodes.1.html
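As a rough starting point before consulting the docs above, a common rule of thumb sizes the NameNode heap at about 1 GB per million file-system objects (files + blocks). A sketch of that arithmetic — the 1.5x headroom factor is an assumption, and the linked HDP doc gives exact per-range recommendations:

```shell
#!/bin/sh
# Sketch: rough NameNode heap estimate in GB.
# Rule of thumb: ~1 GB of heap per million file-system objects,
# with 50% headroom (assumed factor), floored at 1 GB.
namenode_heap_gb() {
  awk -v n="$1" 'BEGIN {
    gb = n / 1000000 * 1.5
    if (gb < 1) gb = 1
    printf "%.1f\n", gb
  }'
}

namenode_heap_gb 50000000   # 50M objects -> 75.0 GB
```

Treat the output only as a sanity check against the values in the Hortonworks sizing tables.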
01-05-2018
02:17 PM
@Manoj Menon The Ambari Server and Agents will communicate normally even when a NameNode failover happens. Ambari Server and Agent communication happens over the secure ports 8440 and 8441 and is not dependent on the NameNode status/failover. Please see: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-administration/content/default_network_port_numbers_-_ambari.html
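If you want to verify that an agent host can actually reach those two ports, a quick check from the agent — a sketch using bash's /dev/tcp redirection; "ambari.example.com" is a placeholder for your Ambari Server hostname:

```shell
#!/bin/bash
# Sketch: test TCP reachability of the Ambari Server registration (8440)
# and heartbeat (8441) ports from an agent host.
port_open() {
  host=$1; port=$2
  # /dev/tcp is a bash feature; timeout avoids hanging on filtered ports.
  timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

for p in 8440 8441; do
  if port_open ambari.example.com "$p"; then   # placeholder hostname
    echo "port $p reachable"
  else
    echo "port $p NOT reachable"
  fi
done
```

If a port is not reachable, check firewalls between the agent and server rather than anything NameNode-related.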