Member since: 07-09-2019
Posts: 422
Kudos Received: 97
Solutions: 58
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 427 | 07-06-2025 05:24 AM |
| | 443 | 05-28-2025 10:35 AM |
| | 2152 | 08-26-2024 08:17 AM |
| | 2753 | 08-20-2024 08:17 PM |
| | 1134 | 07-08-2024 04:45 AM |
04-14-2021
11:00 PM
Hi Cloudera Support team, can you please help us with this? Is there any update on it?
03-18-2021
03:44 AM
2 Kudos
why am I not seeing python and sh options as well?
You need to install the python and sh interpreters manually; refer to the following doc on how to install them: https://zeppelin.apache.org/docs/0.8.0/usage/interpreter/installation.html
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
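If it helps, here is a rough sketch of installing those two interpreters with Zeppelin's bundled install-interpreter.sh script (the /usr/hdp/current/zeppelin-server path is an assumption for an HDP install; adjust it for your environment), followed by a Zeppelin restart so they show up in the interpreter list:
# cd /usr/hdp/current/zeppelin-server
# ./bin/install-interpreter.sh --name python,shell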
03-12-2021
01:50 AM
1 Kudo
I modified the permissions on /user to 777, after which I was able to install Spark and the history server:
sudo -u hdfs hadoop fs -chmod 777 /user
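As a quick sanity check (just a sketch), you can confirm the new permissions on the directory itself with:
sudo -u hdfs hadoop fs -ls -d /user
Note that 777 is wide open; tightening it again once the required per-user directories exist may be worth considering.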
03-07-2021
09:33 PM
2 Kudos
@bvishal The given properties will disable authentication for the web UIs and allow anonymous users to access the web UIs of HDFS, YARN, and MapReduce.
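As a rough sanity check after restarting the affected services (hostname and port are placeholders; 50070 is the default non-HTTPS NameNode UI port on HDP 2.x), an unauthenticated request to the HDFS UI should no longer be rejected:
# curl -i http://<NAMENODE_HOST>:50070/dfshealth.html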
01-24-2021
09:22 AM
@AHassan You can use the below commands to start HiveServer2 from the command line:
# su $HIVE_USER
# nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=" " >> /tmp/hiveserver2HD.out 2>> /tmp/hiveserver2HD.log &
Refer to the below doc for more info on starting HDP services from the command line: https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.0/bk_reference/content/starting_hdp_services.html
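As a rough way to verify HiveServer2 is up afterwards, you can try a Beeline connection (host and username are placeholders; 10000 is the default HiveServer2 port):
# beeline -u "jdbc:hive2://<HIVESERVER2_HOST>:10000" -n <username> -e "show databases;"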
01-18-2021
08:50 AM
Solved. Thank you
01-15-2021
10:40 AM
@Christy09 You can use the below API to delete/remove clients:
# curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTERNAME/hosts/HOSTNAME/host_components/<SERVICE_NAME>
Example:
# curl -u admin:<Password> -H "X-Requested-By: ambari" -X DELETE http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTERNAME/hosts/HOSTNAME/host_components/YARN_CLIENT
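If you want to confirm the client component exists before (or is gone after) the DELETE, the same endpoint can be queried with GET; this is just a sketch using the same placeholders:
# curl -u admin:<Password> -H "X-Requested-By: ambari" -X GET http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTERNAME/hosts/HOSTNAME/host_components/YARN_CLIENT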
01-14-2021
08:13 PM
@Christy09 Use the below API to put a node/host in maintenance mode:
curl -u <Username>:<Password> -H 'X-Requested-By:ambari' -i -X PUT -d '{"RequestInfo":{"context":"Turn On Maintenance Mode for host","query":"Hosts/host_name.in(<HOST_NAME>)"},"Body":{"Hosts":{"maintenance_state":"ON"}}}' http://<AMBARI_HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/hosts
Example:
curl -u admin:<PASSWORD> -H 'X-Requested-By:ambari' -i -X PUT -d '{"RequestInfo":{"context":"Turn On Maintenance Mode for host","query":"Hosts/host_name.in(c486-node2.coelab.cloudera.com)"},"Body":{"Hosts":{"maintenance_state":"ON"}}}' http://c486-node1.coelab.cloudera.com:8080/api/v1/clusters/c486/hosts
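To take the host back out of maintenance mode later, the same call should work with the state set to OFF (a sketch using the same placeholders):
curl -u <Username>:<Password> -H 'X-Requested-By:ambari' -i -X PUT -d '{"RequestInfo":{"context":"Turn Off Maintenance Mode for host","query":"Hosts/host_name.in(<HOST_NAME>)"},"Body":{"Hosts":{"maintenance_state":"OFF"}}}' http://<AMBARI_HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/hosts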
12-21-2020
04:34 AM
@bvishal It is expected that you get the above error if you do not authenticate from the browser. You have two options: either disable SPNEGO authentication, or keep it enabled and configure the browser for it.
You can disable SPNEGO auth by setting the below properties in Advanced core-site:
hadoop.http.authentication.simple.anonymous.allowed = true
hadoop.http.authentication.type = simple
To enable SPNEGO authentication, follow the below article: https://community.cloudera.com/t5/Community-Articles/User-authentication-from-Windows-Workstation-to-HDP-Realm/ta-p/245957
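If you keep SPNEGO enabled instead, you can test access from a host that has a valid Kerberos ticket using curl's negotiate support (a sketch; principal, host, and port are placeholders):
# kinit <user>@<REALM>
# curl --negotiate -u : http://<NAMENODE_HOST>:50070/dfshealth.html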