Member since: 06-13-2017
Posts: 7
Kudos Received: 3
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1557 | 10-23-2019 05:19 AM |
11-12-2019
11:45 PM
I agree with @sagarshimpi. Please provide the full log trace from the DataNode and check whether you can find any disk-level exceptions in the logs. You can also open the NameNode UI --> "Datanode volume failures" tab to confirm this. Leaving safe mode manually will not solve the problem, because the NameNode has not received block reports from these DataNodes; if you leave safe mode manually, you will get missing-block alerts for all the blocks you have (approx. 34 million). Did you change any configuration at the HDFS level before restarting the whole HDFS service? If yes, please share the property name. Also perform a disk I/O check on all 4 DataNodes using the iostat command and confirm the disks are working fine without any heavy %util; a few quick checks are sketched below.
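A minimal sketch of the checks above, run on the NameNode and on each DataNode (the DataNode log path is an assumption and may differ on your cluster):
```bash
# On the NameNode (or any HDFS client): confirm safe mode status and the per-DataNode report.
hdfs dfsadmin -safemode get
hdfs dfsadmin -report

# On each of the 4 DataNodes: look for disk-level exceptions or volume failures
# in the DataNode log (adjust the path to your cluster's log directory).
grep -iE "ioexception|volume.*fail" /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log | tail -20

# On each DataNode: extended I/O stats, 5 samples at 1-second intervals;
# a sustained %util near 100 on a data disk points to a saturated or failing disk.
iostat -x 1 5
```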
10-31-2019
07:05 AM
@npandey Thanks for your response!

1. The recommended value for the NameNode handler count according to the Cloudera doc: dfs.namenode.service.handler.count and dfs.namenode.handler.count - for each NameNode, set to ln(number of DataNodes in this HDFS service) * 20. https://docs.cloudera.com/documentation/enterprise/5-15-x/topics/cm_mc_autoconfig.html#concept_v4y_vb3_rn__section_bgr_d3w_d4 By that formula, for example with 12 DataNodes it should be ln(12)*20 ≈ 50, but you say to follow the 20*log2(12) formula? Can you cross-check and let me know? (The two formulas give quite different values; see the comparison below.)

2. I have the dump, but can you walk me through how to check how many handlers are running?
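A quick comparison of the two formulas for a 12-DataNode cluster (plain awk; awk's log() is the natural logarithm):
```bash
# ln(12) * 20  -> the formula from the Cloudera autoconfiguration doc
awk 'BEGIN { printf "ln(12)*20    = %.1f\n", log(12) * 20 }'            # ~49.7, i.e. ~50
# 20 * log2(12) -> the formula suggested in the reply
awk 'BEGIN { printf "20*log2(12)  = %.1f\n", 20 * log(12) / log(2) }'   # ~71.7, i.e. ~72
```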
10-23-2019
05:19 AM
2 Kudos
@ShaatH Yes, it is mandatory to disable yarn ats-hbase as described in the document. This only stops ats-hbase and destroys its saved configuration; your actual data will still be present. We destroy ats-hbase because of the old non-HA or non-secure configuration: the destroy removes the old configuration so that, after enabling HA, ats-hbase can pick up the right and latest configs. (The destroy step is sketched below.)
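A minimal sketch of that destroy step, assuming the default service name ats-hbase and the default yarn-ats service user (both may differ on your cluster); this does not remove the underlying data:
```bash
# Switch to the service user that owns ats-hbase (assumed default: yarn-ats).
sudo su - yarn-ats
# Stop and destroy only the saved ats-hbase service definition/configuration.
yarn app -destroy ats-hbase
```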
06-29-2017
03:11 PM
@npandey This can happen due to a conflict between the RuntimeDelegate from Jersey in the YARN client libs and the copy in Spark's assembly jar. Please refer to the article below for more information. https://community.hortonworks.com/articles/101145/spark-job-failure-with-javalanglinkageerror-classc.html
Also, note that hive-site.xml should contain only Spark-related properties, such as the metastore information. You can download this for the Spark job using the "Download Client Configs" option in Ambari.
Passing the complete file (/etc/hive/conf/hive-site.xml) may include ATS-related properties, which can also cause this issue; a rough example of passing a trimmed file is sketched below.
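A rough sketch of submitting a Spark job with a trimmed hive-site.xml instead of the full /etc/hive/conf/hive-site.xml (the job class, jar, and paths here are hypothetical placeholders):
```bash
# Use a hive-site.xml that contains only the metastore connection properties,
# e.g. the one extracted from Ambari's downloaded client configs.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --files /tmp/spark-client-configs/hive-site.xml \
  --class com.example.MySparkJob \
  /tmp/my-spark-job.jar
```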