Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2721 | 12-06-2018 12:25 PM |
| | 2863 | 11-27-2018 06:00 PM |
| | 2194 | 11-22-2018 03:42 PM |
| | 3567 | 11-20-2018 02:00 PM |
| | 6276 | 11-19-2018 03:24 PM |
12-05-2017 03:48 PM
@Michael Bronson, Yes, Ambari calls the same API internally. If you want to run it from the command line instead of the Ambari GUI, you can use the API. That takes care of clearing the entries from the DB; you need not delete them manually.
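For reference, a minimal command-line sketch of that call, using the v1 REST endpoint quoted in the reply below; the server URL, credentials, cluster name, and host name here are all placeholders:

```bash
# Hypothetical values; substitute your own Ambari server, cluster, and host.
AMBARI="http://ambari.example.com:8080"
CLUSTER="mycluster"
NODE="node1.example.com"

# Deleting the host through the API lets Ambari clean up its own DB entries.
curl -u admin:admin -H "X-Requested-By: ambari" \
  -X DELETE "$AMBARI/api/v1/clusters/$CLUSTER/hosts/$NODE"
```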
12-05-2017 03:35 PM
1 Kudo
@Michael Bronson, Add a semicolon at the end of each statement:

```
ambari=> Delete from hoststate where host_id=255;
ambari=> Delete from hosts where host_id=255;
```

Note: do the above with caution. Before removing the node from the cluster, it is advised to move all the services to other nodes and make sure that the node is clean before deleting it. Below is the API to delete the node:

```
curl -u {ambari-user}:{ambari-password} -H "X-Requested-By: ambari" -X DELETE http://{ambari-host}:{ambari-port}/api/v1/clusters/{clustername}/hosts/{hostname}
```

Thanks, Aditya
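If you do touch the DB directly, it is worth confirming the host_id first. A hedged sketch, assuming a default Ambari Postgres install where the hosts table carries host_id and host_name columns:

```bash
# Database name and user are assumptions for a stock Ambari setup;
# the host name is a placeholder.
psql -U ambari -d ambari \
  -c "SELECT host_id, host_name FROM hosts WHERE host_name = 'node1.example.com';"
```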
12-05-2017 02:52 PM
@Mike Bit, This error can occur because of a JDBC driver version mismatch. Replace the Zeppelin Hive JDBC jar with the one provided by the Hive server. Hopefully that should work. Thanks, Aditya
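A rough sketch of that swap on an HDP-style layout; every path below is an assumption, so verify the locations on your cluster first:

```bash
# Assumed HDP paths; adjust for your install.
ZEPPELIN_JDBC_DIR=/usr/hdp/current/zeppelin-server/interpreter/jdbc
HIVE_JDBC_JAR=$(ls /usr/hdp/current/hive-client/lib/hive-jdbc-*-standalone.jar | head -1)

# Set aside the jar Zeppelin ships, then drop in the server's jar.
mv "$ZEPPELIN_JDBC_DIR"/hive-jdbc-*.jar /tmp/
cp "$HIVE_JDBC_JAR" "$ZEPPELIN_JDBC_DIR"/

# Restart the Zeppelin service (or just the JDBC interpreter) to pick it up.
```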
12-05-2017 12:22 PM
@Michael Bronson, The serviceId is different from the NameNode host name; I think that is fine. There is no conflict there.
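You can see the mapping between logical ids and hosts for yourself. A small sketch; the nameservice id 'hdfsha' and the example hosts are assumptions based on this thread:

```bash
# Logical NameNode ids live in dfs.ha.namenodes.<nameservice>; each id is
# bound to a real host:port via dfs.namenode.rpc-address.<nameservice>.<id>.
hdfs getconf -confKey dfs.ha.namenodes.hdfsha               # e.g. nn1,nn2
hdfs getconf -confKey dfs.namenode.rpc-address.hdfsha.nn1   # e.g. master01.example.com:8020
hdfs getconf -confKey dfs.namenode.rpc-address.hdfsha.nn2   # e.g. master03.example.com:8020
```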
12-05-2017 12:04 PM
@Jon Udaondo, I do not see any errors in the above logs. Can you tail these logs and restart the region servers to see if any ERROR lines get printed? That would be helpful for debugging.
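A hedged one-liner for watching the log during the restart; the log path follows the pattern mentioned in the reply below and may differ on your nodes:

```bash
# Follow the region server log and surface ERROR lines as they appear.
tail -f /var/log/hbase/hbase-hbase-regionserver-$(hostname -f).log | grep -i error
```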
12-05-2017 11:54 AM
@Jon Udaondo, Can you please attach the region server logs located under /var/log/hbase/hbase-hbase-regionserver-{hostname}.log? Thanks, Aditya
12-05-2017 11:45 AM
@Michael Bronson, Just summarising things. The original issue discussed in this thread is "Unable to determine service address for namenode nn1", which was due to using the wrong service Ids: you were using 'master01' and 'master03' instead of 'nn1' and 'nn2'. After using the correct service Ids you got past the initial error, and you are now facing a connection refused error because the NameNodes are not started. I see another thread opened for that issue (https://community.hortonworks.com/questions/149951/how-to-force-name-node-to-be-active.html).

Please do not deviate from the main issue. If you think that the main issue discussed in this thread is resolved, please accept the answer and follow up on a single thread. It will be easier for other community users to follow the thread and understand the root cause. Hope this helps 🙂 Thanks, Aditya
12-05-2017 10:47 AM
@Michael Bronson, The hostname looks different in two places: hdfs getconf -namenodes gives 'master01.sys56.com' while the above logs give 'master01.sys564.com'. Is it sys56 or sys564? Check the hostname properly and start HDFS. Make sure that the properties dfs.namenode.rpc-address.hdfsha.nn1 and dfs.namenode.rpc-address.hdfsha.nn2 are set correctly. Thanks, Aditya
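A hedged way to cross-check the two spellings, assuming shell access on the masters:

```bash
# The hosts HDFS believes are NameNodes (from dfs.namenode.rpc-address.*).
hdfs getconf -namenodes

# The actual FQDN of each master; run this on the master and compare.
hostname -f

# Look for a sys56 vs sys564 typo in static host resolution.
grep master01 /etc/hosts
```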
12-05-2017 10:30 AM
@Michael Bronson, Run the health check as below:

```
hdfs haadmin -checkHealth nn1
hdfs haadmin -checkHealth nn2
```
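Note that checkHealth is typically silent on success, with the exit status carrying the result. A small sketch of reading it, plus getServiceState to see which node is active or standby:

```bash
# checkHealth usually prints nothing when healthy; exit status 0 means OK.
hdfs haadmin -checkHealth nn1 && echo "nn1 healthy"
hdfs haadmin -checkHealth nn2 && echo "nn2 healthy"

# Show which logical id is currently active vs standby.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```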
12-05-2017 09:00 AM
@Michael Bronson, I need the value:

```
grep -A 3 dfs.ha.namenodes /etc/hadoop/conf/hdfs-site.xml
```
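For context, the grep should surface a property block like the following; the nameservice id and the nn1,nn2 values here are illustrative assumptions:

```bash
grep -A 3 dfs.ha.namenodes /etc/hadoop/conf/hdfs-site.xml
# Expected shape of the match (values are illustrative):
#   <name>dfs.ha.namenodes.hdfsha</name>
#   <value>nn1,nn2</value>
# </property>
```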