Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Views | Posted |
|---|---|
| 1921 | 06-15-2020 05:23 AM |
| 15476 | 01-30-2020 08:04 PM |
| 2074 | 07-07-2019 09:06 PM |
| 8121 | 01-27-2018 10:17 PM |
| 4571 | 12-31-2017 10:12 PM |
02-09-2019
06:15 PM
@Geoffrey - yes, all hosts and IPs resolve.
02-10-2019
10:14 PM
1 Kudo
@Michael Bronson HWX doesn't recommend upgrading an individual HDP component, because one never knows what incompatibilities could impact the other components, and selective component upgrades tend to be a nightmare during a version upgrade. The latest HDP Kafka version is 2.1.x, delivered by HDP 3.1, but ASF has its own release schedule and naming convention. HTH
02-03-2019
07:33 AM
@Michael Bronson As we can see, this is a "500 Error", which indicates an Internal Server Error, so you should see a very detailed stack trace inside your ambari-server.log. Can you please share the complete ambari-server.log so that we can check what might be failing?
01-28-2019
11:57 AM
1 Kudo
@Michael Bronson If you have exhausted all other avenues, yes:
Step 1: Check and compare the /usr/hdp/current/kafka-broker symlinks on both clusters.
Step 2: Download both env files as backups from the problematic and the functioning cluster. Upload the functioning cluster's env to the problematic one (since you have a backup), then start Kafka through Ambari.
Step 3: sed -i 's/verify=platform_default/verify=disable/' /etc/python/cert-verification.cfg
Step 4: Lastly, if the above steps don't remedy the issue, remove and re-install the ambari-agent, and remember to manually point it to the correct Ambari server in ambari-agent.ini.
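The sed substitution in Step 3 can be rehearsed on a scratch copy before touching the real file; a minimal sketch (the path /etc/python/cert-verification.cfg is from the post above, while the temp-file dry run and the sample file contents are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Work on a scratch copy so the real config stays untouched during the dry run.
tmpcfg=$(mktemp)
cat > "$tmpcfg" <<'EOF'
[https]
verify=platform_default
EOF

# Same substitution as Step 3, applied to the scratch copy.
sed -i 's/verify=platform_default/verify=disable/' "$tmpcfg"

grep '^verify=' "$tmpcfg"   # prints: verify=disable
rm -f "$tmpcfg"
```

Once the output looks right, the same sed line can be run against /etc/python/cert-verification.cfg (after backing it up).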
01-25-2019
03:13 PM
1 Kudo
@Michael Bronson No, unfortunately I don't have a test cluster. The configuration looks straightforward: just create a YAML file, i.e. kafka.yaml, in /etc/kafka_discovery, which you export as KAFKA_DISCOVERY_DIR; have a look at the README.md file. Can you tokenize your sensitive hostnames and share the YAML file you created? I am sure we can sort that out. I can only spin up a single-node Kafka broker this weekend and test. Please revert.
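A sketch of what the discovery setup described above might look like. The exact YAML keys should be checked against the tool's README.md; the cluster name "dev", the broker and ZooKeeper hostnames, and the temp directory used here are all placeholders:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder discovery dir for a dry run; the post uses /etc/kafka_discovery.
export KAFKA_DISCOVERY_DIR=$(mktemp -d)

# Hypothetical kafka.yaml; hostnames and key names are assumptions
# to be verified against the project's README.md.
cat > "$KAFKA_DISCOVERY_DIR/kafka.yaml" <<'EOF'
---
clusters:
  dev:
    broker_list:
      - "broker1.example.com:9092"
    zookeeper: "zk1.example.com:2181/kafka"
local_config:
  cluster: dev
EOF

ls "$KAFKA_DISCOVERY_DIR"   # prints: kafka.yaml
```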
01-24-2019
02:38 PM
1 Kudo
@Michael Bronson ZooKeeper nodes usually share the same port, so I suggest you have a look at the brokers instead. Please have a look at my response to a similar problem: https://community.hortonworks.com/questions/235766/kafka-socket-server-failed-bind-exception.html?childToView=234784#answer-234784 HTH
01-21-2019
09:53 AM
@Michael Bronson There is a typo in your URL: "FSNamesytem" should be "FSNamesystem"; one character 's' is missing from the word. So please try this: # curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://name2:8080/api/v1/clusters/clu45/host_components?HostRoles/component_name=NAMENODE&metrics/dfs/FSNamesystem/HAState=standby"
01-15-2019
11:51 AM
1 Kudo
@Michael Bronson Step 1: You can get all the hostnames where the DataNode is present using the following API call: # curl --user admin:admin -H 'X-Requested-By: ambari' -X GET "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/services/HDFS/components/DATANODE?fields=host_components/HostRoles/host_name" | grep host_name | awk '{print $NF}' | awk -F'"' '{print $2}'
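The grep/awk pipeline above can be checked offline against a sample fragment shaped like Ambari's host_components JSON (the two hostnames and the exact whitespace here are illustrative; in practice the input comes from the curl call):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fragment resembling the JSON lines the curl call returns (assumed layout).
sample='      "host_name" : "newhwx1.example.com",
      "host_name" : "newhwx2.example.com",'

# Same extraction pipeline as in the post:
# last field of each matching line, then the text between the quotes.
echo "$sample" | grep host_name | awk '{print $NF}' | awk -F'"' '{print $2}'
# prints:
# newhwx1.example.com
# newhwx2.example.com
```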
Step 2: Once you have the list of hosts where the DataNode is installed (using the above API call), you can use it to make the following API call, with a shell script replacing $HOST with each hostname: # curl --user admin:admin -H 'X-Requested-By: ambari' -X POST -d '{"RequestInfo":{"command":"RESTART","context":"Restart all components on $HOST","operation_level":{"level":"HOST","cluster_name":"NewCluster"}},"Requests/resource_filters":[{"service_name":"HDFS","component_name":"DATANODE","hosts":"$HOST"}, {"service_name":"YARN","component_name":"NODEMANAGER","hosts":"$HOST"}, {"service_name":"AMBARI_METRICS","component_name":"METRICS_MONITOR","hosts":"$HOST"} ]}' http://ambariserver.example.com:8080/api/v1/clusters/NewCluster/requests NOTE: In the above call, please make sure to replace $HOST with each hostname in turn (using a shell-script loop) retrieved from the previous API call. Also replace "NewCluster" with your own cluster name, and "ambariserver.example.com" and the credentials with your own Ambari hostname and credentials.
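The loop the NOTE describes might be sketched as below. It only prints the curl commands (a dry run; drop the echo to actually execute them), and the hostnames, cluster name, and Ambari URL are placeholders to be replaced with your own values:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholders; substitute your cluster name, Ambari URL, and host list.
CLUSTER="NewCluster"
AMBARI="http://ambariserver.example.com:8080"
HOSTS="newhwx1.example.com newhwx2.example.com"   # normally the Step 1 output

for HOST in $HOSTS; do
  # Build the Step 2 payload with $HOST substituted for this iteration.
  payload='{"RequestInfo":{"command":"RESTART","context":"Restart all components on '"$HOST"'","operation_level":{"level":"HOST","cluster_name":"'"$CLUSTER"'"}},"Requests/resource_filters":[{"service_name":"HDFS","component_name":"DATANODE","hosts":"'"$HOST"'"},{"service_name":"YARN","component_name":"NODEMANAGER","hosts":"'"$HOST"'"},{"service_name":"AMBARI_METRICS","component_name":"METRICS_MONITOR","hosts":"'"$HOST"'"}]}'

  # Dry run: print the command instead of executing it.
  echo curl --user admin:admin -H 'X-Requested-By: ambari' -X POST \
    -d "$payload" "$AMBARI/api/v1/clusters/$CLUSTER/requests"
done
```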
01-21-2019
12:37 PM
@Jay, after we decommission some DataNode, do we also need to stop the components on that DataNode before replacing the disk, or is it enough to decommission it without stopping the components and then replace the disk?
01-14-2019
06:41 AM
@Geoffrey Shelton Okot, do you mean that we need to check the RAM on our DataNode machines? Each machine has 256G of memory, with 198G available. Or did you want us to check something else?