Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2186 | 06-15-2020 05:23 AM |
|  | 18840 | 01-30-2020 08:04 PM |
|  | 2345 | 07-07-2019 09:06 PM |
04-06-2019
08:08 PM
1 Kudo
@Michael Bronson Find the hostnames where the "SPARK2_THRIFTSERVER" component is running:

```shell
# curl -H "X-Requested-By: ambari" -u admin:admin -X GET "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts?(host_components/HostRoles/component_name=SPARK2_THRIFTSERVER)&minimal_response=true" | grep host_name | awk -F":" '{print $2}' | awk -F"\"" '{print $2}'
```

Example output:

```
newhwx3.example.com
newhwx5.example.com
```

Once we know the hosts where "SPARK2_THRIFTSERVER" is running, we can run the following commands (replacing the hosts newhwx3 and newhwx5 as needed) to turn ON maintenance mode for it:

```shell
# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn ON Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"ON"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx3.example.com/host_components/SPARK2_THRIFTSERVER"
# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn ON Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"ON"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx5.example.com/host_components/SPARK2_THRIFTSERVER"
```

To turn OFF maintenance mode for the Spark2 Thrift Server on newhwx3 and newhwx5:

```shell
# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn OFF Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"OFF"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx3.example.com/host_components/SPARK2_THRIFTSERVER"
# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn OFF Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"OFF"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx5.example.com/host_components/SPARK2_THRIFTSERVER"
```
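To see what the `grep`/`awk` pipeline in the GET call above actually extracts, here is the same pipeline run locally against a small hand-written sample of the JSON shape Ambari returns (the sample lines are illustrative, not real cluster output):

```shell
# Feed two sample "host_name" JSON lines through the same pipeline:
# split on ":" to isolate the value, then split on '"' to strip the quotes.
printf '%s\n' \
  '    "host_name" : "newhwx3.example.com",' \
  '    "host_name" : "newhwx5.example.com",' \
| grep host_name | awk -F":" '{print $2}' | awk -F"\"" '{print $2}'
# prints:
# newhwx3.example.com
# newhwx5.example.com
```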
04-01-2019
09:53 AM
1 Kudo
@Michael Bronson, Permission issue 🙂 Either run this command as the hdfs user or change the ownership of /benchmarks/TestDFSIO to root.

```
java.io.IOException: Permission denied: user=root, access=WRITE, inode="/benchmarks/TestDFSIO/io_control/in_file_test_io_0":hdfs:hdfs:drwxr-xr-x
```
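As a side note, the exception text itself carries everything needed to diagnose this; a small `sed` sketch pulls the relevant fields apart (the message string below is copied from the error above, and the output format is just an illustration):

```shell
# Extract who asked for what, and the mode that rejected it:
# user=root wants WRITE, but the inode is hdfs:hdfs with mode drwxr-xr-x
# (no write bit for "other"), hence the IOException.
msg='Permission denied: user=root, access=WRITE, inode="/benchmarks/TestDFSIO/io_control/in_file_test_io_0":hdfs:hdfs:drwxr-xr-x'
printf '%s\n' "$msg" \
| sed -n 's/.*user=\([^,]*\), access=\([^,]*\),.*:\([^:]*\)$/user=\1 wants \2 but mode is \3/p'
# prints: user=root wants WRITE but mode is drwxr-xr-x
```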
03-08-2019
08:46 AM
1 Kudo
Hi @Michael Bronson You are specifying /folder/*.jar. If you want the .jar files from one level deeper, specify /folder/*/*.jar. Or, here is an alternative example (quote the glob so the local shell does not expand it before HDFS sees it):

```shell
[hdfs@c2175-node4 stuff]$ hdfs dfs -find /tmp -name "*.jar"
/tmp/somefolder/y.jar
/tmp/x.jar
[hdfs@c2175-node4 stuff]$ for result in `hdfs dfs -find /tmp -name "*.jar"` ; do hdfs dfs -copyToLocal $result; done
[hdfs@c2175-node4 stuff]$ ls -al
-rw-r--r-- 1 hdfs hadoop 0 Mar 8 08:43 x.jar
-rw-r--r-- 1 hdfs hadoop 0 Mar 8 08:43 y.jar
```
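The one-level versus two-level glob distinction works the same way on a local filesystem, so it can be tried without a cluster; this sketch builds a throwaway directory tree mirroring the layout above and shows which glob matches which file:

```shell
# Recreate the /tmp layout from the example: x.jar at the top level,
# y.jar one directory deeper, then compare the two glob depths.
tmp=$(mktemp -d)
mkdir -p "$tmp/somefolder"
touch "$tmp/x.jar" "$tmp/somefolder/y.jar"
ls "$tmp"/*.jar      # matches only the top-level x.jar
ls "$tmp"/*/*.jar    # matches only the deeper somefolder/y.jar
rm -rf "$tmp"
```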
02-27-2019
02:13 AM
@Michael Bronson I do not think it is possible, because you are talking about two different file systems (HDFS and the local filesystem). If you want to keep syncing your local data directory to an HDFS directory, you can use a tool such as Apache Flume.
02-12-2019
03:14 PM
@Michael Bronson Just create the home directory as follows:

```shell
# su - hdfs
$ hdfs dfs -mkdir /user/slider
$ hdfs dfs -chown slider:hdfs /user/slider
```

That should be enough. Good luck!
02-08-2019
12:01 AM
@Geoffrey - thank you for the excellent answer. One small question: could you please help me with this thread? https://community.hortonworks.com/questions/239890/kafka-what-could-be-the-reasons-for-kafka-broker-i.html
02-10-2019
10:14 PM
1 Kudo
@Michael Bronson HWX doesn't recommend upgrading an individual HDP component, because you never know which incompatibilities could impact the other components, and selective component upgrades tend to be a nightmare during a version upgrade. The latest HDP Kafka version is 11-2.1.x, delivered with HDP 3.1, but ASF has its own release schedule and naming convention. HTH
02-03-2019
07:33 AM
@Michael Bronson As we can see, this is a "500 Error", which indicates an Internal Server Error, so you should see a very detailed stack trace inside your ambari-server.log. Can you please share the complete ambari-server.log so that we can check what might be failing?
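Pulling that stack trace out of a large log is usually a one-liner with `grep` and a line of trailing context; the sketch below runs it on a tiny fabricated log sample (the log lines are made up, and on a real system you would point `grep` at the actual ambari-server.log instead):

```shell
# Toy log sample; grep -A1 prints each ERROR line plus the line after it,
# which is typically the first frame of the stack trace.
printf '%s\n' \
  'INFO  Request completed' \
  'ERROR Internal Server Error (500)' \
  '  at org.example.SomeClass.method(SomeClass.java:42)' \
| grep -A1 ERROR
# prints:
# ERROR Internal Server Error (500)
#   at org.example.SomeClass.method(SomeClass.java:42)
```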
01-28-2019
11:57 AM
1 Kudo
@Michael Bronson If you have exhausted all other avenues, yes:

Step 1: Check and compare the /usr/hdp/current/kafka-broker symlinks.

Step 2: Download both envs as a backup, from the problematic cluster and from the functioning one. Upload the functioning cluster's env to the problematic one (you have a backup), then start Kafka through Ambari.

Step 3: Run:

```shell
sed -i 's/verify=platform_default/verify=disable/' /etc/python/cert-verification.cfg
```

Step 4: Lastly, if the above steps don't remedy the issue, remove and re-install the ambari-agent, and remember to manually point it at the correct Ambari server in ambari-agent.ini.
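Step 3's `sed` edit can be rehearsed safely on a throwaway copy first; this sketch assumes the file carries a `verify=platform_default` line (check your actual /etc/python/cert-verification.cfg before editing it in place, and keep a backup):

```shell
# Write a minimal stand-in config, apply the same in-place substitution,
# and confirm the verify setting flipped to "disable".
cfg=$(mktemp)
printf '[https]\nverify=platform_default\n' > "$cfg"
sed -i 's/verify=platform_default/verify=disable/' "$cfg"
cat "$cfg"
# prints:
# [https]
# verify=disable
rm -f "$cfg"
```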
01-25-2019
03:13 PM
1 Kudo
@Michael Bronson No, unfortunately I don't have a test cluster. The configuration looks straightforward: just create a YAML file, e.g. kafka.yaml, in /etc/kafka_discovery, which you export as KAFKA_DISCOVERY_DIR; have a look at the README.md file. Can you tokenize your sensitive hostnames and share the YAML file you created? I am sure we can sort that out. I can only spin up a single-node Kafka broker this weekend to test. Please revert.