Member since: 09-17-2015
Posts: 436
Kudos Received: 736
Solutions: 81
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3754 | 01-14-2017 01:52 AM
 | 5675 | 12-07-2016 06:41 PM
 | 6512 | 11-02-2016 06:56 PM
 | 2147 | 10-19-2016 08:10 PM
 | 5627 | 10-19-2016 08:05 AM
10-02-2015
11:43 PM
4 Kudos
Might be easier to invoke the Ambari APIs to stop/start all services. To do this at startup, invoke these from your /etc/rc.d/rc.local:

USER=admin
PASSWORD=admin
AMBARI_HOST=localhost

#detect name of cluster
output=`curl -s -u $USER:$PASSWORD -i -H 'X-Requested-By: ambari' http://$AMBARI_HOST:8080/api/v1/clusters`
CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`

#stop all services (note the JSON body uses the detected $CLUSTER rather than a hardcoded "Sandbox")
curl -u $USER:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d "{\"RequestInfo\":{\"context\":\"_PARSE_.STOP.ALL_SERVICES\",\"operation_level\":{\"level\":\"CLUSTER\",\"cluster_name\":\"$CLUSTER\"}},\"Body\":{\"ServiceInfo\":{\"state\":\"INSTALLED\"}}}" http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services

#start all services
curl -u $USER:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d "{\"RequestInfo\":{\"context\":\"_PARSE_.START.ALL_SERVICES\",\"operation_level\":{\"level\":\"CLUSTER\",\"cluster_name\":\"$CLUSTER\"}},\"Body\":{\"ServiceInfo\":{\"state\":\"STARTED\"}}}" http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services

Instead of stopping/starting all services, you can also start services individually. The example below starts only Kafka, HBase, and Storm:

USER=admin
PASSWORD=admin
AMBARI_HOST=localhost

#detect name of cluster
output=`curl -s -u $USER:$PASSWORD -i -H 'X-Requested-By: ambari' http://$AMBARI_HOST:8080/api/v1/clusters`
CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`

for SERVICE in KAFKA HBASE STORM
do
  echo "starting $SERVICE"
  curl -u $USER:$PASSWORD -i -H "X-Requested-By: ambari" -X PUT -d "{\"RequestInfo\": {\"context\" :\"Start $SERVICE via REST\"}, \"Body\": {\"ServiceInfo\": {\"state\": \"STARTED\"}}}" http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
done
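The start/stop calls above return immediately with a request resource, while the actual stop/start runs asynchronously. A sketch (not from the original post) of how a script could wait for completion: the "request_status" field and the /requests endpoint are part of Ambari's request API, and the parsing mirrors the cluster_name sed used above.

```shell
# Sketch: extract the request status from an Ambari request resource,
# using the same sed technique as the cluster_name detection above.
request_status() {
  echo "$1" | sed -n 's/.*"request_status" : "\([^"]*\)".*/\1/p'
}

# Example polling loop (commented out; needs a live Ambari server and
# the request id returned by the PUT above):
# while :; do
#   json=`curl -s -u $USER:$PASSWORD http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/requests/$REQUEST_ID`
#   [ "`request_status "$json"`" = "COMPLETED" ] && break
#   sleep 5
# done
```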
10-02-2015
11:22 PM
I believe it's /etc/hbase/conf/hbase-env.sh (which is a symlink to /usr/hdp/current/hbase-client/conf/hbase-env.sh). I see another version of this file under /usr/hdp/current/hbase-regionserver/conf/hbase-env.sh, which seems to control the region server setting:

diff /etc/hbase/conf.install/hbase-env.sh /etc/hbase/conf/hbase-env.sh
8c8
< export HBASE_CONF_DIR=${HBASE_CONF_DIR:-/usr/hdp/current/hbase-client/conf}
---
> export HBASE_CONF_DIR=${HBASE_CONF_DIR:-/usr/hdp/current/hbase-regionserver/conf}
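Since the /etc/<component>/conf paths on HDP are symlinks into /usr/hdp/current, it is worth confirming which directory you are actually editing before making changes. A small sketch of the technique; the scratch directory just makes it safe to run anywhere, and on a cluster node you would run readlink -f /etc/hbase/conf directly.

```shell
# Sketch: resolve a conf symlink the way HDP lays them out.
# On a real node, simply run: readlink -f /etc/hbase/conf
tmp=`mktemp -d`
mkdir -p "$tmp/hbase-client/conf"
ln -s "$tmp/hbase-client/conf" "$tmp/conf"
readlink -f "$tmp/conf"   # prints the real directory behind the symlink
```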
10-02-2015
10:26 PM
1 Kudo
Are the times in sync on both machines? Maybe kdestroy and kinit again and retry?
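Kerberos rejects requests once clocks drift beyond the allowed skew (300 seconds by default in krb5). A sketch, not from the original reply, for comparing epoch timestamps collected from each host with date +%s:

```shell
# Sketch: check whether two epoch timestamps are within Kerberos's
# default 300-second clock-skew tolerance.
skew_ok() {
  d=$(( $1 - $2 ))
  [ "$d" -lt 0 ] && d=$(( -d ))
  [ "$d" -le 300 ]
}

# usage (hostA is a placeholder for the other machine):
#   skew_ok "`ssh hostA date +%s`" "`date +%s`" && echo "clocks in sync"
```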
10-02-2015
06:52 PM
Thanks Guilherme. I'm checking with them, but not sure it will work since they are looking for a way to avoid sysadmin intervention.
10-02-2015
06:36 PM
This shows how to set up a kerberized 2.3 cluster (using LDAP) and set up Kafka and other Ranger plugins: https://github.com/abajwa-hw/security-workshops/blob/master/Security-workshop-HDP%202_3-IPA.md
10-02-2015
12:15 PM
Getting the below question in the context of an application that creates tables with a custom storage handler: When I tried setting jar locations on the fly using "set hive.aux.jars.path=<file urls>" at the beeline prompt after connecting to HS2, it did not work. It works only if the property is set in hive-site.xml before starting HS2, or the env var HIVE_AUX_JARS_PATH is set to a dir containing my jars, or HS2 is started with --hiveconf hive.aux.jars.path=… — and it then becomes a global setting. I would like it to be a session-specific setting, because setting it through a global property for the process requires an HS2 restart by an admin, which many of our users do not favor. Is this the way the property is supposed to work, or am I doing something wrong? I need this for our storage handlers, SerDes, and UDFs to work. Please note that even "add jar" does not help here. See the Hive ML for full question details.
Labels:
- Apache Hive
10-02-2015
11:56 AM
1 Kudo
These were the steps I used to get the Storm UI working on my Mac on kerberized HDP 2.3 (search for 'Open kerborized browser'): https://github.com/abajwa-hw/security-workshops/blob/master/Setup-ranger-23.md#setup-storm-plugin-for-ranger
10-02-2015
10:55 AM
I don't see an error here. Can you check the logs in /var/log/ranger/kms/?
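A sketch of the kind of scan that helps here; it runs against a scratch file so it works anywhere, but on the node you would point the glob at /var/log/ranger/kms/*.log instead.

```shell
# Sketch: surface errors/exceptions from KMS logs.
# On a real node: grep -iE 'error|exception' /var/log/ranger/kms/*.log | tail -n 20
logdir=`mktemp -d`
printf 'INFO starting KMS\nERROR: unable to connect to DB\n' > "$logdir/kms.log"
grep -iE 'error|exception' "$logdir"/*.log
```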
10-01-2015
04:32 PM
1 Kudo
To disable the admin user in Ranger, Gautham had shared the steps below:

1. Log in to Ranger Admin using admin/admin
2. Go to "Settings" -> "Users/Groups"
3. Select one of the external users synced using LDAP (e.g. ldap_admin)
4. Change its role to ADMIN
5. Click Save
6. Log out of Ranger Admin
7. Log in to Ranger Admin using the external user (ldap_admin)
8. Go to "Settings" -> "Users/Groups"
9. Click the checkbox next to the admin user
10. Select "Set Status" -> "Disabled" from the top right
11. Log out of Ranger Admin

After this point the default admin user will not be able to log in, and the new admin will be ldap_admin.

Important note: you will have to change the following Ambari properties to take note of the new admin user, under "Ranger" -> "Configs" -> "Advanced ranger-env":
- admin_username (this should be the external user that you designated as admin above)
- admin_password (the password of the above user)