Member since
04-25-2016
10-26-2017
01:43 AM
1 Kudo
Scenarios: There are a few scenarios where you may need to migrate Ranger components from one host to another in Ambari:
The server hosting the Ranger master components is faulty, down, or needs to be replaced with newer hardware.
You want to avoid the administrative overhead of enabling Ranger HA with a load balancer, yet still have a recovery mechanism to bring Ranger back online as quickly as possible should you encounter the above situation.
Ranger's design has a degree of HA built in: policies are cached by every plugin. For example, the NameNode keeps a copy of its policies cached on disk, so enforcement continues even when Ranger Admin is unavailable. If Ranger Admin/UserSync is down, administrators only lose the ability to add, modify, or delete policies and users. In many Hadoop clusters, users may not need highly available access to the Ranger UI; it may be acceptable to run for a few hours without the Ranger UI and UserSync components, since existing policies and users will continue to work.
Method:
Stop the Ranger service, stop Ambari, back up the Ambari and Ranger databases, start Ambari, use the Ambari REST API to remove the ranger_admin and usersync host components from the old host, change one Ranger config, add the components back to the new host via the REST API, click Re-install in Ambari, start Admin/UserSync, and restart the rest of the Ranger-dependent services.
Pre-setup:
Choose a failover node for Admin/UserSync.
Ensure firewalls are open between the new host and all nodes in the cluster, the LDAP system, and administrators' desktops.
Ensure the Ranger backend database is configured to be accessible from the new host, e.g. MySQL permissions - https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/configuring_database_instance.html
If using an AD/Unix account for Ranger, ensure it will be available on the new host.
Ensure Kerberos admin credentials are available.
If using LDAPS, ensure the root and intermediate certificates for LDAPS are in the truststore of the new host. You should be able to reuse the same truststore JKS files from the old host.
If Ranger SSL is enabled, ensure the relevant Ranger Admin and UserSync keystore and truststore JKS files are ready to use on the new host in the same locations. You won't be able to reuse the JKS files from the old host, as the common name will differ for the new host. Ensure the root and intermediate CA certificates (if applicable) are also in the Java cacerts. Review the Ranger SSL Ambari guide.
You may choose to install the ranger_<version>-admin and ranger_<version>-usersync packages ahead of time to speed up the migration and prevent any installation issues while exercising the failover.
Ex. yum install ranger_2_4_3_0_227-admin ranger_2_4_3_0_227-usersync
Migration Exercise:
Below are commands taken directly from my test cluster exercise. The test cluster is HDP 2.4.3 with MIT Kerberos and Ranger SSL enabled. Other HDP versions should work using the same method; I will update/comment again once I have tested on higher versions. Ranger KMS is out of scope for this exercise, though similar steps should apply to it.
Test Cluster Values
export AMBARI_USER=admin
export AMBARI_PASSWD=admin
export AMBARI_URL=http://wg01.wg.com:8080
export CLUSTER_NAME=WG243
export SERVICE=RANGER
export COMPONENT_HOST=wg03.wg.com
export COMPONENT_NEW_HOST=wg01.wg.com
Ensure the Ranger service is stopped. You may use the command below if needed.
curl -ik -u $AMBARI_USER:$AMBARI_PASSWD -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop Ranger via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' "$AMBARI_URL/api/v1/clusters/$CLUSTER_NAME/services/$SERVICE"
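The stop request above is asynchronous; you can poll the same service endpoint until it reports INSTALLED (i.e. stopped). Below is a minimal sketch of the response parsing, using a sample JSON body in place of the live call (the GET request it corresponds to is shown in the comment; the grep-based parsing assumes Ambari's pretty-printed JSON layout):

```shell
# Sketch: extract ServiceInfo/state from an Ambari API response.
# The live response would come from:
#   curl -sk -u $AMBARI_USER:$AMBARI_PASSWD \
#     "$AMBARI_URL/api/v1/clusters/$CLUSTER_NAME/services/$SERVICE?fields=ServiceInfo/state"
RESPONSE='{ "ServiceInfo" : { "state" : "INSTALLED" } }'
STATE=$(echo "$RESPONSE" | grep -o '"state" *: *"[A-Z_]*"' | grep -o '[A-Z_]*"$' | tr -d '"')
echo "Ranger service state: $STATE"
```

Loop on this (with a sleep) until the state is INSTALLED before proceeding to the backups.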
BackUp Ranger Database
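A minimal backup sketch, assuming a MySQL backend with database name "ranger" and DB user "rangeradmin" (both names are assumptions; match your own admin-properties settings and run the dump on the DB host):

```shell
# Choose a timestamped backup file so repeated runs don't overwrite each other:
BACKUP_FILE="/tmp/ranger-db-$(date +%Y%m%d-%H%M%S).sql"
echo "Backing up Ranger DB to $BACKUP_FILE"
# Uncomment to run the actual dump (DB name and user are assumptions):
# mysqldump -u rangeradmin -p ranger > "$BACKUP_FILE"
```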
Stop ambari and BackUp Ambari Database
Migrating Ranger Admin Component:
Delete the Ranger Admin component on the current host:
curl -ik -u $AMBARI_USER:$AMBARI_PASSWD -H 'X-Requested-By: ambari' -X DELETE "$AMBARI_URL/api/v1/clusters/$CLUSTER_NAME/hosts/$COMPONENT_HOST/host_components/RANGER_ADMIN"
Add the Ranger Admin component on the new host:
curl -ik -u $AMBARI_USER:$AMBARI_PASSWD -H 'X-Requested-By: ambari' -X POST -d '{"host_components" : [{"HostRoles":{"component_name":"RANGER_ADMIN"}}] }' "$AMBARI_URL/api/v1/clusters/$CLUSTER_NAME/hosts?Hosts/host_name=$COMPONENT_NEW_HOST"
Head to the Ambari UI, open the new host's page, and click Re-install under Ranger Admin.
Update the configuration under Ambari -> Ranger -> Configs -> Advanced -> "External URL" to reflect the new hostname.
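If you prefer to script this step, Ambari ships a configs.sh helper that can set a single property. A sketch follows; the script path is the Ambari default, and policymgr_external_url in admin-properties is, to the best of my knowledge, the property behind the "External URL" field, but verify both against your Ambari version before running:

```shell
# Sketch: set Ranger's External URL from the command line instead of the UI.
NEW_URL="http://${COMPONENT_NEW_HOST:-wg01.wg.com}:6080"   # use https://...:6182 if Ranger SSL is on
echo "New external URL: $NEW_URL"
# Uncomment to apply (config-type and property names are assumptions; verify first):
# /var/lib/ambari-server/resources/scripts/configs.sh -u $AMBARI_USER -p $AMBARI_PASSWD \
#   set wg01.wg.com $CLUSTER_NAME admin-properties policymgr_external_url "$NEW_URL"
```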
Start Ranger admin from Ambari UI
Check ranger admin logs /var/log/ranger/admin/xa_portal.log for any errors.
Until HDP 2.4, due to bug RANGER-1073, you may need to run chown ranger:ranger /etc/ranger/admin/.rangeradmin.jceks.crc on the new Ranger Admin server and restart Ranger Admin.
Ensure the Ranger Admin UI is accessible, then proceed to the other components.
Migrating Ranger UserSync Component:
Delete ranger UserSync component on the current host:
curl -ik -u $AMBARI_USER:$AMBARI_PASSWD -H 'X-Requested-By: ambari' -X DELETE "$AMBARI_URL/api/v1/clusters/$CLUSTER_NAME/hosts/$COMPONENT_HOST/host_components/RANGER_USERSYNC"
Add the Ranger UserSync component on the new host:
curl -ik -u $AMBARI_USER:$AMBARI_PASSWD -H 'X-Requested-By: ambari' -X POST -d '{"host_components" : [{"HostRoles":{"component_name":"RANGER_USERSYNC"}}] }' "$AMBARI_URL/api/v1/clusters/$CLUSTER_NAME/hosts?Hosts/host_name=$COMPONENT_NEW_HOST"
Start Ranger UserSync from Ambari UI
Ensure /var/log/ranger/usersync/usersync.log does not show any unusual errors and that UserSync syncs to the LDAP system every 5 to 10 minutes. Until HDP 2.4, due to RANGER-1073, you may need to run chown ranger:ranger /usr/hdp/current/ranger-usersync/conf/.ugsync.jceks.crc on the new Ranger UserSync server and restart it. One issue I encountered was the following error:
UnixAuthenticationService [main] - ERROR: Service: UnixAuthenticationService
java.io.IOException: Keystore was tampered with, or password was incorrect
I moved the JCEKS files out of the UserSync conf directory, after which UserSync started fine with no errors:
mv /etc/ranger/usersync/conf/ugsync.jceks /etc/ranger/usersync/conf/unixauthservice.jks /tmp/
Migrating Ranger TagSync Component:
This component is available starting with HDP 2.5. It can be deleted from the Ambari UI under the host page of the current TagSync host and then installed on a different host using the "Add" option. You may also remove it via the REST API if needed:
curl -ik -u $AMBARI_USER:$AMBARI_PASSWD -H 'X-Requested-By: ambari' -X DELETE "$AMBARI_URL/api/v1/clusters/$CLUSTER_NAME/hosts/$COMPONENT_HOST/host_components/RANGER_TAGSYNC"
Restart all dependent services such as HDFS, YARN, HBase, Kafka, etc.
You may start by restarting a single component to verify that it checks in for policies. For example, check the NameNode log for any errors connecting to the new Ranger Admin host, and confirm the latest entry in the plugin section of the Ranger UI audit page. Please provide any feedback/comments, or even better, vote if this article helped you.
04-19-2017
11:52 AM
1 Kudo
Use Cases:
Separate out HiveServers for batch jobs, ensuring that user jobs or too many user connections don't affect batch workloads, and that one group's workloads don't affect another's. Secure Hive connections through Knox, which only works with HTTP mode for HiveServer2. Apply custom configurations for different workloads, such as different execution engines (MR, Tez, or anything else).
Method: Use Ambari config groups to customize the default hive.server2.zookeeper.namespace to something other than the default value "hiveserver2".
Reference:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_hadoop-high-availability/content/ha-hs2-service-discovery.html
http://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-operations/content/using_host_config_groups.html
Caveat: The beeline connection string will differ from the default one; the new string needs a different value for zooKeeperNamespace, which can cause confusion.
Example: I will be using Ambari 2.4.2 with HDP 2.3.2, but this should work with anything higher. In this cluster we have two hosts, wg-pm1 and wg-pm3, that have HiveServer2 installed and running with the default configuration. Below, I will update the configuration so that the HiveServer2 on wg-pm1 runs with hive.server2.zookeeper.namespace set to hiveserver2-batch.
Click the override button and enter a name for the Ambari Hive configuration group. One host can only be part of one Ambari config group, so choose the name wisely. If you already have an existing config group you would like to use, you may use that. I chose to create a new Hive configuration group named HiveServer2-batch.
Click "Manage Hosts" to update the hosts that are part of the new config group. Then choose hiveserver-batch, add the host, and save.
Now Ambari should prompt you to customize the hive.server2.zookeeper.namespace option. I chose hiveserver2-batch as the ZooKeeper namespace for this HiveServer2 and clicked save.
Ambari should prompt you to restart the affected HiveServers. After the restart, the dedicated HiveServers can be discovered only through the new ZooKeeper namespace.
My default beeline JDBC string is as below:
beeline -u "jdbc:hive2://wg-pm1.wg.com:2181,wg-pm2.wg.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
To connect to the dedicated HiveServers, we need to use the customized value of hive.server2.zookeeper.namespace as the value of zooKeeperNamespace:
beeline -u "jdbc:hive2://wg-pm1.wg.com:2181,wg-pm2.wg.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-batch"
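Since the only moving part between the two strings is the zooKeeperNamespace value, it can help to build the URL from a variable so users pick a namespace rather than retype the whole string. A small sketch (host list copied from the example above; the beeline invocation is left commented out):

```shell
# Build the HiveServer2 discovery URL from its parts:
ZK_QUORUM="wg-pm1.wg.com:2181,wg-pm2.wg.com:2181"
ZK_NAMESPACE="hiveserver2-batch"   # use "hiveserver2" for the default group
JDBC_URL="jdbc:hive2://${ZK_QUORUM}/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=${ZK_NAMESPACE}"
echo "$JDBC_URL"
# beeline -u "$JDBC_URL"
```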
03-24-2017
04:03 AM
1 Kudo
Just had a new idea that can probably solve the problem. We can have different account names such as hive-clusterA/hostname@realm.com and hive-clusterB/hostname@realm.com, and then have an auth_to_local rule in clusterA that converts hive-clusterA to hive, and vice versa in clusterB. This is very similar to how the "dn", "nn", and "nm" principals get resolved. Cheers
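A sketch of what the clusterA side of that mapping might look like in hadoop.security.auth_to_local (realm and principal names here are placeholders, not tested config):

```
RULE:[2:$1@$0](hive-clusterA@REALM.COM)s/.*/hive/
DEFAULT
```

Here [2:$1@$0] rewrites the two-component principal hive-clusterA/hostname@REALM.COM into hive-clusterA@REALM.COM, the parenthesized regex matches it, and the substitution collapses it to the short name hive. ClusterB would carry the mirror-image rule for hive-clusterB.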
03-22-2017
09:42 PM
2 Kudos
@Roland Simonis @Robert Levas My thoughts from my experience: First, a compromised hive.keytab is itself a major security risk; that isn't a typical situation, and there should be restrictions in place to prevent it.
Second, we can choose to use separate REALMs for the clusters, in which case the rules will be specific to each individual cluster. Third, we can remove the "DEFAULT" rule from auth_to_local and then manually write rules for the needed principals.
More details: https://hortonworks.com/blog/fine-tune-your-apache-hadoop-security-settings/
I am sure we can get more creative with auth_to_local for cluster-specific rules and principals, but the simplest way to remain secure would be to use separate realms. With the given scenario, that seems appropriate, since the clusters are part of the same Kerberos realm with the default configurations. The rules are necessary to resolve HDFS directories, HDFS folder/file permissions, Hive table owners, and many other things that use the short name; I don't think Ranger can help with that. Let me know your thoughts. Cheers
06-09-2016
09:52 PM
Yes http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_HDP_RelNotes/content/ch_relnotes_v242.html
06-07-2016
01:10 AM
1 Kudo
No, the above command is to be used inside beeline. I am assuming you are trying to connect to the Phoenix Python shell, in which case these are the commands. These are the Linux shell commands I used to get into the Phoenix shell on a Kerberized cluster:
export HBASE_CONF_PATH=/etc/hbase/conf:/etc/hadoop/conf
If you are logged in as your own user:
kinit
/usr/hdp/current/phoenix-client/bin/sqlline.py hostname.domain.com:2181:/hbase-secure
If logged in as the hbase user:
kinit -k -t /etc/security/keytabs/hbase.headless.keytab hbase
/usr/hdp/current/phoenix-client/bin/sqlline.py hostname.domain.com:2181:/hbase-secure:hbase@domain.com:/etc/security/keytabs/hbase.headless.keytab
Then you can run the following command to show the list of Phoenix tables:
!tables
06-06-2016
10:35 PM
No, you do not need !connect in hbase shell; only "hbase shell".
06-06-2016
10:19 PM
@Sri Bandaru I would say follow this page: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_installing_manually_book/content/validating-phoenix-installation.html
jdbc:phoenix:<Zookeeper_host_name>:<port_number>:<secured_Zookeeper_node>:<principal_name>:<HBase_headless_keytab_file>
Try this:
!connect jdbc:phoenix:zkhostname.domain.com:2181:/hbase-secure:hbase@HDP.DOMAIN.COM:/etc/security/keytabs/hbase.headless.keytab
You had an extra "phoenix:jdbc" in the command. Let me know if it worked.