Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Views | Posted |
|---|---|
| 1918 | 06-15-2020 05:23 AM |
| 15462 | 01-30-2020 08:04 PM |
| 2071 | 07-07-2019 09:06 PM |
| 8112 | 01-27-2018 10:17 PM |
| 4571 | 12-31-2017 10:12 PM |
09-23-2019
06:54 AM
Just a note: for HDP 3.1, the Spark version is Apache Spark 2.3.2, not 2.4.
08-20-2019
01:49 PM
Hi Mike,

The controller is responsible for administrative operations, including assigning partitions to brokers and monitoring for broker failures. If no leaders are being assigned, one thing you can check is the controller status in the ZooKeeper CLI. Start the ZooKeeper client with "bin/zkCli.sh", then execute: get /controller and see whether the output shows an active controller. If the output shows "null", you can run rmr /controller; this will trigger a new controller election. Finally, make sure you don't have authorization issues by checking the server.log files during the broker restart.

Regards, Manuel.
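For reference, a minimal sketch of what a healthy /controller znode payload looks like and how you might pull the active broker id out of it. The JSON below is a made-up example in the shape Kafka stores under that znode; on a live cluster you would read it with `get /controller` in zkCli.sh:

```shell
# Made-up example payload in the format Kafka keeps under /controller.
CONTROLLER_ZNODE='{"version":1,"brokerid":1001,"timestamp":"1566300000000"}'

# Extract the id of the broker currently acting as controller.
# An empty result (or a literal "null" payload) means no controller is elected.
BROKER_ID=$(echo "$CONTROLLER_ZNODE" | sed -n 's/.*"brokerid":\([0-9]*\).*/\1/p')
echo "Active controller broker id: $BROKER_ID"
```

If that variable comes back empty on your cluster, that matches the "null" case above where removing the znode forces a re-election.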
07-25-2019
09:22 AM
@David Sanchez one other thing, please: can you help me with this post? https://community.hortonworks.com/questions/249557/is-it-necessary-to-restart-the-ambari-server-after.html
07-25-2019
12:59 AM
1 Kudo
I would strongly suggest reinstalling the OS. The extra time you will spend (a few hours at most) is nothing compared to the thousands of hours you could spend debugging in the future. You will never be able to isolate the problem or be sure the previous installation is not affecting you. Unless your case is very particular (e.g., you really DO NOT have access to the machine, etc.), reinstall the OS and give yourself peace of mind. Nice guide though, @Geoffrey Shelton Okot.
07-27-2019
07:19 PM
@Jay as you know, we are using a script that uses the API to delete the service. So if there is no need to restart the Ambari server, my understanding is that deletion via the API is all that is needed, with no additional steps required. Correct me if I am wrong.
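For context, the kind of call such a script would issue is an Ambari REST DELETE against the service endpoint. The host, cluster, and service names below are placeholders, and the command is printed rather than executed so the sketch is safe to run anywhere:

```shell
# Placeholder values; substitute your own Ambari host, cluster, and service.
AMBARI_HOST=ambariserver.example.com
CLUSTER=hdp
SERVICE=SPARK2

# The service generally has to be stopped before Ambari accepts the DELETE.
# Printed instead of executed for illustration.
echo curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  "http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE"
```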
07-17-2019
01:29 AM
@Michael Bronson DEBUG/TRACE logging consumes a lot of disk space. Hence it is better to disable it once you have collected enough trace/debug logs for your troubleshooting purposes.
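As an illustration, turning verbose logging back off usually means restoring the root logger level in the component's log4j.properties. The appender name "RFA" below is an assumption (it is a common rolling-file-appender name); check your component's actual configuration:

```properties
# Assumed log4j.properties fragment: restore the root logger from DEBUG back to INFO.
# "RFA" is a commonly used appender name here, but verify yours before applying.
log4j.rootLogger=INFO, RFA
```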
07-07-2019
09:06 PM
# Getting the state of the component (replace YARN_CLIENT with SPARK2_THRIFTSERVER as needed)
curl -u admin:admin -H "X-Requested-By:ambari" -i -X GET \
  http://<HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/hosts/<HOST_FQDN>/host_components/YARN_CLIENT

# Setting the state of the component to INSTALLED
curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT \
  -d '{"RequestInfo":{"context":"Install YARN_CLIENT"}, "Body":{"HostRoles":{"state":"INSTALLED"}}}' \
  http://<HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/hosts/<HOST_FQDN>/host_components/YARN_CLIENT
07-07-2019
10:27 PM
1 Kudo
@Michael Bronson You can make the following API call to find out on how many nodes SPARK2_THRIFTSERVER is installed. Please check the property "installed_count" in the response.

# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_SERVER:8080/api/v1/clusters/hdp/services/SPARK2/components/SPARK2_THRIFTSERVER"

If you want to know the list of hosts where SPARK2_THRIFTSERVER is installed in your cluster, and their current state, you can make the following API call:

# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_SERVER:8080/api/v1/clusters/hdp/services/SPARK2/components/SPARK2_THRIFTSERVER?fields=host_components/HostRoles/host_name,host_components/HostRoles/state"
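As a sketch of reading the result: the count lives under "ServiceComponentInfo" in the JSON Ambari returns. The response body below is a trimmed, made-up example in that shape, with invented counts:

```shell
# Made-up, trimmed response body in the shape Ambari returns for this endpoint.
RESPONSE='{"ServiceComponentInfo":{"component_name":"SPARK2_THRIFTSERVER","installed_count":2,"started_count":2}}'

# Pull out installed_count without any extra tooling.
COUNT=$(echo "$RESPONSE" | sed -n 's/.*"installed_count":\([0-9]*\).*/\1/p')
echo "SPARK2_THRIFTSERVER installed on $COUNT host(s)"
```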
07-06-2019
08:59 PM
The above question and the entire response thread below were originally posted in the Community Help track. On Wed Jun 26 21:14 UTC 2019, a member of the HCC moderation staff moved it to the Cloud & Operations track. The Community Help track is intended for questions about using the HCC site itself, not technical questions about using Ambari or upgrading HDP.
07-05-2019
08:04 AM
@Michael Bronson Ideally there should be no gap in time: all cluster nodes should be in exact time sync. Otherwise there can be many issues, such as incorrect timestamps in Metrics data and Kerberos ticket problems like the well-known "Clock skew" error, etc. https://community.hortonworks.com/content/supportkb/187925/gssexception-failure-unspecified-at-gss-api-level.html
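To make the Kerberos point concrete, here is a small sketch of the check. The 5-minute (300 s) tolerance is Kerberos' default clock-skew limit; the two node timestamps are invented for illustration (and the snippet assumes GNU date for `-d`):

```shell
# Kerberos' default clock-skew tolerance is 5 minutes (300 seconds).
MAX_SKEW=300

# Invented wall-clock readings from two cluster nodes, as epoch seconds (GNU date).
T1=$(date -u -d '2019-07-05 08:04:00' +%s)
T2=$(date -u -d '2019-07-05 08:10:30' +%s)

SKEW=$(( T2 - T1 ))
if [ "$SKEW" -le "$MAX_SKEW" ]; then
  echo "skew ${SKEW}s: within tolerance"
else
  echo "skew ${SKEW}s: would trigger a clock skew error"
fi
```

In this invented case the 6.5-minute gap exceeds the tolerance, which is exactly the situation behind the GSSException in the linked KB article.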