Member since
10-19-2015
279
Posts
340
Kudos Received
25
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2218 | 05-12-2017 10:12 AM |
 | 3933 | 04-03-2017 11:13 AM |
 | 1175 | 03-28-2017 05:26 PM |
 | 2670 | 03-06-2017 12:31 PM |
 | 148927 | 03-02-2017 08:24 AM |
09-08-2022
07:16 PM
Thank you, it works! 🤗
05-28-2020
12:55 AM
Works for me. +1 on enabling SolrCloud. (Ambari -> Ranger -> Configs -> Ranger Audit -> Audit to SolrCloud: ON)
06-28-2018
07:08 AM
This article explains the additional steps required to configure wire encryption by exporting/importing certificates across clusters so that distcp works in a wire-encrypted multi-cluster environment.
Problem: in a wire-encrypted multi-cluster environment, distcp fails if the steps in this article are not performed. You may see an SSL error like the following:
javax.net.ssl.SSLHandshakeException: DestHost:destPort <KMS_HOST>:9393 , LocalHost:localPort null:0. Failed on local exception: javax.net.ssl.SSLHandshakeException: Error while authenticating with endpoint: https://<KMS_HOST>:9393/kms/v1/?op=GETDELEGATIONTOKEN&rene.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
Prerequisites:
1) Both clusters should be set up with Ranger.
2) Wire encryption should already be enabled on both clusters: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/enabling-ssl-for-components.html
3) If Ranger KMS is installed, wire encryption should be enabled for Ranger KMS as well on both clusters.
Steps to configure SSL for distcp to work across clusters:
1) Export the certificate from the Hadoop server keystore file on every host of cluster1 and cluster2:
cd <server_hadoop_key_location>; keytool -export -alias hadoop_cert_<host_name> -keystore <keystore_file_path> -rfc -file hadoop_cert_<host_name> -storepass <keystore_password>
Note: if you don't know the location of the keystore, search for the config "ssl.server.keystore.location" in the HDFS configuration.
2) Copy all the certificates generated for cluster1 in the previous step from the cluster1 hosts to the client key location on every host of cluster2, and similarly copy all the certificates generated for cluster2 from the cluster2 hosts to the client key location on every host of cluster1.
3) Import all the cluster1 certificates into the Hadoop client truststore on every host of cluster2, and vice versa:
cd <client_hadoop_key_location>; keytool -import -noprompt -alias hadoop_cert_<host_name> -file hadoop_cert_<host_name> -keystore <truststore_file_path> -storepass <truststore_password>
Note: if you don't know the location of the truststore, search for the config "ssl.client.truststore.location" in the HDFS configuration.
Additional steps if Ranger KMS is installed:
If Ranger KMS is installed, the Ranger KMS certificates from the KMS hosts of cluster1 also need to be added to the Hadoop client truststore of cluster2, and vice versa.
1) Export the certificate from the Ranger KMS server keystore file on the KMS hosts of cluster1 and cluster2:
cd <kms_key_store_location>; keytool -export -alias kms_cert_<host_name> -keystore <kms_keystore_file_path> -rfc -file kms_cert_<host_name> -storepass <kms_keystore_password>
Note: if you don't know the location of the KMS keystore, search for the config "ranger.https.attrib.keystore.file" in the KMS configuration.
2) Copy all the KMS certificates generated for cluster1 in the previous step from the cluster1 KMS hosts to the client key location on every host of cluster2, and similarly copy all the KMS certificates generated for cluster2 from the cluster2 KMS hosts to the client key location on every host of cluster1.
3) Import all the cluster1 KMS certificates into the Hadoop client truststore on every host of cluster2, and vice versa:
cd <client_hadoop_key_location>; keytool -import -noprompt -alias kms_cert_<host_name> -file kms_cert_<host_name> -keystore <truststore_file_path> -storepass <truststore_password>
Now restart HDFS, YARN, MapReduce, and Ranger KMS on both clusters. Once all services have started successfully, try distcp again; it should work fine:
hadoop distcp -Dmapreduce.job.hdfs-servers.token-renewal.exclude=cluster1 -skipcrccheck -update /distcp_cluster1 hdfs://cluster2/distcp_cluster2/
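To sanity-check the imports before restarting services, you can list the imported alias in the client truststore (a quick check using the same placeholder paths and aliases as above; adjust them to your environment):
keytool -list -keystore <truststore_file_path> -storepass <truststore_password> -alias hadoop_cert_<host_name>
If the alias shows up as a trustedCertEntry on every host, the truststore side of the setup is in place.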
05-16-2018
12:59 PM
I had a similar issue and fixed it with the following Ambari repo: wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.6.1.5/ambari.repo -O /etc/yum.repos.d/ambari.repo
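In case the old repo metadata is still cached, a refresh before retrying usually helps (a minimal follow-up, assuming a CentOS 6 host managed with yum; the package name depends on which component you were installing):
yum clean all
yum repolist
yum install ambari-server    # or ambari-agent, on an agent host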
04-02-2018
05:44 AM
1 Kudo
An express or rolling upgrade stops all services, including Knox, but this does not impact the upgrade process: the client will already have a cookie and can keep accessing Ambari for as long as the current session is alive and the cookie is not lost or expired. Resolution if the cookie is lost or the session has expired: the client should use the local login to access Ambari and proceed further, e.g. <ambari_host:ambari_port>/#/login/local
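If you want to confirm that the local admin account still works without going through the UI, a quick call against the Ambari REST API does the job (a hedged example: 8080 is only the default port, and admin/<password> stands for whatever your local Ambari account is):
curl -u admin:<password> http://<ambari_host>:8080/api/v1/clusters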
03-13-2018
01:32 PM
1 Kudo
Prerequisites:
1) You must have a MySQL Server database instance running to be used by Ranger.
2) Execute the following command on the Ambari Server host, replacing database-type with mysql|oracle|postgres|mssql|sqlanywhere and /jdbc/driver/path with the location of the corresponding JDBC driver:
ambari-server setup --jdbc-db={database-type} --jdbc-driver={/jdbc/driver/path}
3) Make sure the root user has access to the database from the Ranger host and the Ranger KMS host. For example, if your Ranger host is ranger.host.com and your Ranger KMS host is ranger.kms.host.com, run the following on the MySQL database:
GRANT ALL ON *.* TO 'root'@'ranger.host.com' IDENTIFIED BY '<root_password>' WITH GRANT OPTION;
GRANT ALL ON *.* TO 'root'@'ranger.kms.host.com' IDENTIFIED BY '<root_password>' WITH GRANT OPTION;
flush privileges;
Steps:
1) In Ambari, go to the Add Service wizard.
2) Select the Ranger and Ranger KMS hosts.
3) Make sure you fill in the following properties carefully for both Ranger and Ranger KMS:
DB_FLAVOR = mysql
Ranger DB host = <db_host> (e.g. test.mysql.com)
Setup Database and Database User = yes
Database Administrator (DBA) username = root
Database Administrator (DBA) password = <root_password>
4) Any name can be used for the following DB and user properties; they will be created fresh by the root user because "Setup Database and Database User" is set to yes:
Ranger DB name = ranger
Ranger DB username = rangeradmin
Ranger DB password = rangeradmin
5) Ranger KMS has an additional property, KMS master key password: <kms_password>. This is also a newly configured password of your choice.
6) Audit can be configured as you prefer if you want auditing for service operations.
Note: I have not covered every property in this article; these are the important ones where people commonly make mistakes.
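Before launching the wizard, it is worth confirming that root can actually reach the database from the Ranger and Ranger KMS hosts (a quick check, assuming the mysql client is installed on those hosts and using the placeholder values from above):
mysql -u root -p'<root_password>' -h <db_host> -e "SELECT 1;"
If this fails with an access-denied error, revisit the GRANT statements in prerequisite 3.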
03-16-2018
10:25 AM
Hi @Jinyu Li, your issue is most likely caused by Hive Permission Inheritance. After creating the tables, the Sqoop job tries to change the owner/mode of the created HDFS files. Ranger permissions (even rwx) do not grant the right to change the POSIX owner/mode, which is why the operation fails; Ranger classifies such a failure as an "EXECUTE" action. You can find more details in the HDFS audit log, stored locally on the NameNode. Solution: could you please set "hive.warehouse.subdir.inherit.perms" to false and re-run the job? This stops Hive imports from trying to set permissions, which is fine when Ranger is the primary source of authorization. See https://cwiki.apache.org/confluence/display/Hive/Permission+Inheritance+in+Hive for more details. Best, Benjamin
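For reference, the property can be changed through Ambari (Hive -> Configs) or directly in hive-site.xml; a minimal snippet, assuming you manage hive-site.xml by hand, looks like this:
<property>
  <!-- stop Hive from propagating warehouse directory owner/permissions to new subdirectories -->
  <name>hive.warehouse.subdir.inherit.perms</name>
  <value>false</value>
</property>
Restart the Hive services afterwards so the change takes effect.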
03-01-2018
12:08 PM
Thank you @Sharmadha Sainath, it is working fine 🙂
03-15-2018
09:45 PM
That's correct, @GN_Exp. If you want to do SLA in Knox via the Ranger plugin, then you'd need Kerberos too.
02-27-2018
03:19 PM
2 Kudos
@Prakash Punj HDFS as an audit source is not supported on the Ranger side; the plugins can only store audits to HDFS. So if you want to see audits in the Ranger UI, you need to change the audit source to Solr and store the audits in Solr as well.
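For reference, the plugin-side audit destinations are controlled by properties like the following (shown with placeholder values; on Ambari-managed clusters they live in each service's ranger-<service>-audit configuration, so verify the exact names and ports against your version):
xasecure.audit.destination.solr=true
xasecure.audit.destination.solr.urls=http://<solr_host>:<solr_port>/solr/ranger_audits
xasecure.audit.destination.hdfs=true
xasecure.audit.destination.hdfs.dir=hdfs://<namenode_host>:8020/ranger/audit
Solr is what the Ranger UI reads from; the HDFS destination is only long-term storage.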