Member since: 10-19-2015
Posts: 279
Kudos Received: 340
Solutions: 25
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2164 | 05-12-2017 10:12 AM
| 3757 | 04-03-2017 11:13 AM
| 1121 | 03-28-2017 05:26 PM
| 2594 | 03-06-2017 12:31 PM
| 146431 | 03-02-2017 08:24 AM
06-28-2018
07:08 AM
This article explains the additional steps required to configure wire encryption, by exporting/importing certificates across the clusters, so that distcp works in a wire-encrypted multi-cluster environment.

Problem: In a wire-encrypted multi-cluster environment, distcp fails if the steps in this article are not performed; you may see an SSL error like the following:
javax.net.ssl.SSLHandshakeException: DestHost:destPort <KMS_HOST>:9393 , LocalHost:localPort null:0. Failed on local exception: javax.net.ssl.SSLHandshakeException: Error while authenticating with endpoint: https://<KMS_HOST>:9393/kms/v1/?op=GETDELEGATIONTOKEN&rene.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

Prerequisites:
1) Both clusters should be set up with Ranger.
2) Wire encryption should already be enabled on both clusters:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/enabling-ssl-for-components.html
3) If Ranger KMS is installed, then wire encryption should be enabled for Ranger KMS too, in both clusters.

Steps to configure SSL for distcp to work in a multi-cluster environment:
1) Export the certificate from the Hadoop server keystore file on every host that is part of cluster1 and cluster2:
cd <server_hadoop_key_location>; keytool -export -alias hadoop_cert_<host_name> -keystore <keystore_file_path> -rfc -file hadoop_cert_<host_name> -storepass <keystore_password>
Note: if you don't know the location of the keystore, you can search for the config "ssl.server.keystore.location" in the HDFS config.
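For example, on an Ambari-managed host the HDFS SSL settings typically live in /etc/hadoop/conf/ssl-server.xml (this path is an assumption; adjust it for your installation), so you can look the keystore location up directly:
grep -A1 'ssl.server.keystore.location' /etc/hadoop/conf/ssl-server.xml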
2) Copy all the certificates generated for cluster1 in the previous step from the cluster1 hosts to the client key location on all the hosts that are part of cluster2,
and similarly copy all the certificates generated for cluster2 from the cluster2 hosts to the client key location on all the hosts that are part of cluster1.
3) Import all the cluster1 certificates into the Hadoop client truststore on every host of cluster2, and vice versa:
cd <client_hadoop_key_location>; keytool -import -noprompt -alias hadoop_cert_<host_name> -file hadoop_cert_<host_name> -keystore <truststore_file_path> -storepass <truststore_password>
Note: if you don't know the location of the truststore, you can search for the config "ssl.client.truststore.location" in the HDFS config.
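To confirm the imports landed in the right truststore, you can list its entries and look for the new aliases (a quick sanity sketch; the truststore path and password are the same placeholders as in step 3):
keytool -list -keystore <truststore_file_path> -storepass <truststore_password> | grep hadoop_cert_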
Additional steps if Ranger KMS is installed:
If Ranger KMS is installed, then we also need to export the Ranger KMS certificate from the KMS hosts of cluster1 into the Hadoop client truststore of cluster2, and vice versa.
1) Export the certificate from the Ranger KMS server keystore file on the KMS hosts that are part of cluster1 and cluster2:
cd <kms_key_store_location>; keytool -export -alias kms_cert_<host_name> -keystore <kms_keystore_file_path> -rfc -file kms_cert_<host_name> -storepass <kms_keystore_password>
Note: if you don't know the location of the KMS keystore, you can search for the config "ranger.https.attrib.keystore.file" in the KMS config.
2) Copy all the certificates generated for KMS in cluster1 in the previous step from the cluster1 KMS hosts to the client key location on all the hosts that are part of cluster2,
and similarly copy all the certificates generated for KMS in cluster2 from the cluster2 KMS hosts to the client key location on all the hosts that are part of cluster1.
3) Import all the cluster1 KMS certificates into the Hadoop client truststore on every host of cluster2, and vice versa:
cd <client_hadoop_key_location>; keytool -import -noprompt -alias kms_cert_<host_name> -file kms_cert_<host_name> -keystore <truststore_file_path> -storepass <truststore_password>

Now restart HDFS, YARN, MapReduce and Ranger KMS on both clusters, and once all the services have started successfully, try distcp; it should work fine:
hadoop distcp -Dmapreduce.job.hdfs-servers.token-renewal.exclude=cluster1 -skipcrccheck -update /distcp_cluster1 hdfs://cluster2/distcp_cluster2/
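As a final sanity check before rerunning distcp, you can confirm what certificate the KMS endpoint actually presents (a sketch; the host placeholder and port 9393 come from the error at the top of this article):
echo | openssl s_client -connect <KMS_HOST>:9393 2>/dev/null | openssl x509 -noout -subject -issuer -dates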
04-01-2018
07:45 PM
1 Kudo
Labels: Apache Knox
03-13-2018
01:32 PM
1 Kudo
Prerequisites:
1) You must have a MySQL Server database instance running to be used by Ranger.
2) Execute the following command on the Ambari Server host, replacing database-type with mysql|oracle|postgres|mssql|sqlanywhere and /jdbc/driver/path with the location of the corresponding JDBC driver:
ambari-server setup --jdbc-db={database-type} --jdbc-driver={/jdbc/driver/path}
3) Make sure the root user has access to the DB from the Ranger host and the Ranger KMS host (a quick connectivity check is sketched at the end of this article). E.g., if your Ranger host is ranger.host.com and your Ranger KMS host is ranger.kms.host.com,
then you should run the following commands on the MySQL database:
GRANT ALL ON *.* TO 'root'@'ranger.host.com' IDENTIFIED BY '<root_password>' WITH GRANT OPTION;
GRANT ALL ON *.* TO 'root'@'ranger.kms.host.com' IDENTIFIED BY '<root_password>' WITH GRANT OPTION;
flush privileges;

Steps:
1) Go to Ambari and open the Add Service wizard.
2) Select the Ranger and Ranger KMS hosts.
3) Make sure you fill in the following properties carefully, in both Ranger and Ranger KMS:
DB_FLAVOR = mysql
Ranger DB host = <db_host>, e.g. test.mysql.com
Setup Database and Database User = yes
Database Administrator (DBA) username = root
Database Administrator (DBA) password = <root_password>
4) Any name can be configured for the following DB or user properties; the database and user will be created fresh by the root user, since we set "Setup Database and Database User" to yes:
Ranger DB name = ranger
Ranger DB username = rangeradmin
Ranger DB password = rangeradmin
5) Ranger KMS has an additional property, KMS master key password = <kms_password>; this is also a newly configured password of your choice.
6) Audits can be configured as you see fit, if you want auditing of service operations.
Note: I have not covered all the properties in this article; these are the ones where people make mistakes.
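As a closing tip, before starting the wizard you can verify that the grants from prerequisite 3 actually work by connecting from the Ranger host (a minimal sketch, reusing the example DB host test.mysql.com from above; it should print 1 without an access-denied error):
mysql -h test.mysql.com -u root -p -e 'SELECT 1;'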
03-12-2018
06:05 AM
I think you are using the LDAP that comes with Knox. Can you please check whether it is up and running? If it is, then check that admin user's password in the users.ldif file under /etc/knox/conf; by default it is admin-password.
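If you have ldapsearch available, a quick bind test can confirm both points at once. A sketch assuming the Knox demo-LDAP defaults (port 33389 and the admin DN below come from the stock users.ldif and may differ in your setup):
ldapsearch -h <knox_host> -p 33389 -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' -w admin-password -b 'dc=hadoop,dc=apache,dc=org' 'uid=admin'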
03-11-2018
06:57 PM
@GN_Exp is there a Ranger Knox policy present that allows access to the admin user?
03-07-2018
08:46 AM
@Jinyu Li can you please try reading this file through a direct HDFS call (hdfs dfs -cat)? In this case it will use READ_EXECUTE. Try the call first as the yarn user and then as the hive user, and check whether it works for both of them.
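Something like the following, assuming you can sudo to the service users and substituting your actual file path:
sudo -u yarn hdfs dfs -cat /path/to/your/file
sudo -u hive hdfs dfs -cat /path/to/your/file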
03-07-2018
03:05 AM
One more question: are both of the policies you shared present at the same time when you run this operation?
03-07-2018
02:36 AM
Is there any deny policy item present in your resource-based or tag-based policy?
03-06-2018
04:00 PM
I see the enforcer as hadoop-acl wherever I see execute-permission logs ("enforcer":"hadoop-acl"). I think there is a gap in the Ranger policy meant to allow this operation; can you please post a screenshot of the Ranger policy that you think should have allowed it?
02-27-2018
08:25 PM
1 Kudo
@GN_Exp is this cluster an unsecured cluster? If so, can you please validate the username and password given in the Knox repo?
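One quick way to validate those credentials outside Ranger is to call a Knox-proxied endpoint with them directly. A sketch assuming a standard Knox setup (gateway port 8443 and the default topology name are assumptions; the username/password are whatever you configured in the repo):
curl -iku <repo_username>:<repo_password> 'https://<knox_host>:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'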