Member since: 08-08-2013
Posts: 339
Kudos Received: 132
Solutions: 27
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 16075 | 01-18-2018 08:38 AM |
| | 1997 | 05-11-2017 06:50 PM |
| | 10394 | 04-28-2017 11:00 AM |
| | 4115 | 04-12-2017 01:36 AM |
| | 3210 | 02-14-2017 05:11 AM |
03-10-2016
04:57 PM
2 Kudos
Hello, after switching the port for Ranger Admin, Ambari is showing an alert for the Ranger Admin check because it is still trying to connect to Ranger Admin on port 6080... How can I reconfigure this Ambari check so that it connects to Ranger Admin on the correct port (now it is 6182)? Thanks in advance, Gerd. PS: this has been the case for ~5 hours now; at first I thought I'd give Ambari a few minutes... 😉 HDP 2.2.4.2, Ambari 2.0.1
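Not from the post, but one way to find out where the check takes its port from is Ambari's alert-definitions REST API. The Ambari host, cluster name, credentials, and the definition id below are placeholders, not values from the post:

```shell
# Placeholders: Ambari host, cluster name and credentials are assumptions.
AMBARI="http://ambari-host:8080"
CLUSTER="mycluster"

# List alert definitions and look for the Ranger admin check:
curl -s -u admin:admin "$AMBARI/api/v1/clusters/$CLUSTER/alert_definitions" | grep -i ranger

# Inspect a single definition (replace <id> with the id found above) to see
# which configuration property its URI/port is derived from:
curl -s -u admin:admin "$AMBARI/api/v1/clusters/$CLUSTER/alert_definitions/<id>"
```

These commands only inspect the running Ambari server, so they are safe to try before changing anything.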
Labels:
- Apache Ambari
- Apache Ranger
03-07-2016
10:05 AM
2 Kudos
To disable SSLv3 for an old Hue version, do the following:
1. Open the file "/usr/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py"
2. Add the following line after line no. 1669:
ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
3. Restart Hue
To check whether Hue still answers on SSLv3, you can use:
openssl s_client -connect <hue-server-name>:<hue-port> -ssl3
Here is the corresponding GitHub patch, thanks to @cdraper: https://github.com/cloudera/hue/commit/0060abf9aae0049c082c9948658eea7df848ab6e
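A sketch of scripting step 2, in case you have to patch several hosts. The path and line number are taken from the post above; verify them against your Hue build before patching:

```shell
# Insert the SSL option line after line 1669 of wsgiserver.py, keeping a
# backup. Path and line number are from the post above -- verify them first.
F=/usr/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py
sudo cp "$F" "$F.bak"
{
  head -n 1669 "$F.bak"
  printf '        ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)\n'
  tail -n +1670 "$F.bak"
} | sudo tee "$F" > /dev/null
```

The head/tail split avoids sed dialect differences around preserving the leading indentation, which matters in a Python source file.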
03-07-2016
09:25 AM
Solution found and published here: https://community.hortonworks.com/articles/21650/disable-sslv3-for-hue-v261.html
03-07-2016
09:25 AM
Hi @Andy LoPresto, many thanks for your reply. Since we are not using Apache httpd for that, I applied the one-line patch that disables SSLv3 in the embedded web server (see my answer).
...and in the long run I want to switch to Ambari Views anyway 😉
03-02-2016
07:49 PM
2 Kudos
Hello, due to security constraints I have to disable SSLv3 for all the web UIs available in the stack. For Ambari/Ranger/Knox this isn't a problem at all, but I have no clue how to disable SSLv3 in Hue (the version used is 2.6.1-2, yes, pretty old 😉). Any hint is highly appreciated. Thanks, Gerd
Labels:
- Cloudera Hue
02-14-2016
04:45 PM
1 Kudo
Hi, after some searching (and thanks to this post), the SSL truststore access problem is solved. Just replace the value of "trustStorePassword" with the Knox master secret set during the installation of Knox.
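For reference, a sketch of building a truststore of your own instead of pointing beeline at Knox's gateway.jks directly. The alias "gateway-identity" and the keystore path are the Knox defaults as far as I know; treat them, and the file names and password below, as assumptions to verify:

```shell
# Export Knox's self-signed identity certificate. The keystore password
# keytool prompts for is the Knox master secret.
sudo keytool -export -alias gateway-identity \
  -keystore /var/lib/knox/data/security/keystores/gateway.jks \
  -file /tmp/knox.cer

# Import it into a truststore of your own, with a password you choose:
keytool -import -noprompt -alias knox -file /tmp/knox.cer \
  -keystore "$HOME/knox-truststore.jks" -storepass myTrustPass

# Then point sslTrustStore/trustStorePassword in the beeline connect
# string at $HOME/knox-truststore.jks and myTrustPass.
```

This way the Knox master secret never has to appear in a JDBC URL.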
02-14-2016
02:50 PM
4 Kudos
Hi, I am trying to connect to Hive through Knox via beeline (HDP 2.2.4, Knox 0.4). Based on http://hortonworks.com/hadoop-tutorial/secure-jdbc-odbc-clients-access-hiveserver2-using-apache-knox/ I set the described config parameters accordingly, but I don't understand what is meant in the chapter about the SSL certificate. I have to use a self-signed certificate, therefore I just tried exactly the same sslTrustStore and sslTrustStorePassword values as in the document, but it is failing with:
16/02/14 15:40:11 [main]: WARN jdbc.Utils: ***** JDBC param deprecation *****
16/02/14 15:40:11 [main]: WARN jdbc.Utils: The use of hive.server2.transport.mode is deprecated.
16/02/14 15:40:11 [main]: WARN jdbc.Utils: Please use transportMode like so: jdbc:hive2://<host>:<port>/dbName;transportMode=<transport_mode_value>
16/02/14 15:40:11 [main]: WARN jdbc.Utils: ***** JDBC param deprecation *****
16/02/14 15:40:11 [main]: WARN jdbc.Utils: The use of hive.server2.thrift.http.path is deprecated.
16/02/14 15:40:11 [main]: WARN jdbc.Utils: Please use httpPath like so: jdbc:hive2://<host>:<port>/dbName;httpPath=<http_path_value>
Error: Could not create an https connection to jdbc:hive2://<knox-host>:8443/;ssl=true;sslTrustStore=/var/lib/knox/data/security/keystores/gateway.jks;trustStorePassword=knox?hive.server2.transport.mode=http;hive.server2.thrift.http.path=gateway/default/hive. Keystore was tampered with, or password was incorrect (state=08S01,code=0)
My connect string:
beeline> !connect jdbc:hive2://<knox-host>:8443/;ssl=true;sslTrustStore=/var/lib/knox/data/security/keystores/gateway.jks;trustStorePassword=knox?hive.server2.transport.mode=http;hive.server2.thrift.http.path=gateway/default/hive
The referenced documentation says in Step 4: "In the example here, I am connecting to Knox on HDP 2.1 Sandbox which uses a self-signed certificate for SSL. I have exported this certificate to a file in /root/truststore.jks and set a password to this file." But what exactly is meant by "this certificate", and which password is needed to export it into another file? Are there some default values? What am I missing to create a beeline-via-Knox connection successfully?
Labels:
- Apache Hive
- Apache Knox
02-11-2016
01:30 PM
8 Kudos
While enabling SSL for Ranger (I am writing about Ranger 0.4 from HDP 2.2.4, configured by Ambari) I ran into issues and lost some days. The critical points are:
- Verify that the property "Common Name For Certificate" in the Ranger policy definition AND in the plugin configuration matches the DN of the certificate you use.
- If you use HA-enabled services like NameNode HA, you have to use the same certificate (at least) on both NameNodes to match the "Common Name for Certificate".
!! Do not use the server's FQDN for the DN property at certificate creation time !!
This is the step-by-step instruction of what I did to make it work. Please keep in mind that I use ONE certificate on all nodes. If that does not match your security criteria, you have to adapt the steps accordingly.
1. Create the Ranger-admin keystore:
cd /etc/ranger/admin/conf/
sudo /usr/java/jdk1.7.0_79/bin/keytool -genkey -keyalg RSA -alias ranger-admin -keystore ranger-admin-keystore.jks -validity 360 -keysize 2048 -storepass <password>
2. Export the Ranger-admin key to a .cer file and distribute it to all cluster nodes:
sudo /usr/java/jdk1.7.0_79/bin/keytool -export -keystore /etc/ranger/admin/conf/ranger-admin-keystore.jks -alias ranger-admin -file ranger-admin-trust.cer
# copy the .cer to all the other hosts
3. ssh to one master, e.g. the active NameNode, and create the agent certificate (it will be used on all other hosts as well):
cd /etc/hadoop/conf/
sudo /usr/java/jdk1.7.0_79/bin/keytool -genkey -noprompt -dname "CN=commonname, OU=test, O=test" -keyalg RSA -alias rangeragent -keystore ranger-agent-keystore.jks -validity 360 -keysize 2048 -storepass <password>
# keep the storepass in mind for configuring the plugins later on in Ambari
# important is the "commonname", you need this value in the Ranger repository definition and in the Ranger plugin configuration
4. Export that cert and distribute it to all nodes in the cluster:
sudo /usr/java/jdk1.7.0_79/bin/keytool -export -keystore /etc/hadoop/conf/ranger-agent-keystore.jks -alias rangeragent -file ranger-agent.cer
5. Create a truststore ON ALL NODES for the ranger-admin cert (the one from step 2):
sudo /usr/java/jdk1.7.0_79/bin/keytool -import -file ranger-admin-trust.cer -alias rangeradmintrust -keystore /etc/hadoop/conf/ranger-admin-truststore.jks -storepass <password>
# do this on all nodes where ranger plugins will become active
6. Import the "agent" cert (the one exported in step 4) on the ranger-admin node into the default Java keystore "cacerts":
ssh <ranger-admin-node>
cd /etc/ranger/admin/conf/
sudo /usr/java/jdk1.7.0_79/bin/keytool -import -file ranger-agent.cer -alias rangeragent -keystore /usr/java/jdk1.7.0_79/jre/lib/security/cacerts -storepass <cacerts-password>
Now that the underlying SSL setup is done, you can proceed with configuring Ranger and the Ranger plugins in Ambari by providing:
- keystore_file_path = /etc/hadoop/conf/ranger-agent-keystore.jks
- truststore_file_path = /etc/hadoop/conf/ranger-admin-truststore.jks
- the corresponding password properties
- the property "common.name.for.certificate" = commonname
Restart the services and have fun configuring Ranger policies 😄 Check the latest timestamp of the agents in Ranger => Audit => Agents to verify that all the plugins received the latest policies.
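A few sanity checks after the steps above. Paths and aliases are the ones from the walkthrough; "changeit" is the stock Java cacerts password and may have been changed on your systems:

```shell
KT=/usr/java/jdk1.7.0_79/bin/keytool

# Agent keystore on each plugin host should contain the rangeragent key:
sudo $KT -list -keystore /etc/hadoop/conf/ranger-agent-keystore.jks \
  -storepass <password> | grep rangeragent

# Ranger-admin trust on each plugin host:
sudo $KT -list -keystore /etc/hadoop/conf/ranger-admin-truststore.jks \
  -storepass <password> | grep rangeradmintrust

# Agent cert in cacerts on the ranger-admin node ("changeit" is the JDK default):
sudo $KT -list -keystore /usr/java/jdk1.7.0_79/jre/lib/security/cacerts \
  -storepass changeit | grep rangeragent
```

If any grep comes back empty, the corresponding import/export step did not land where the plugins expect it.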
02-11-2016
12:42 PM
1 Kudo
Hello, just to update you on the real solution of the problem: it was caused by an underlying SSL certificate issue after enabling Ranger HTTPS. The issue was solved by importing the ranger-admin trust into the Java keystore "/usr/java/jdk1.7.0_79/jre/lib/security/cacerts". Assuming you have a Ranger cert in /etc/ranger/admin/conf/ranger-admin-keystore.jks, then:
sudo /usr/java/jdk1.7.0_79/bin/keytool -export -keystore /etc/ranger/admin/conf/ranger-admin-keystore.jks -alias ranger-admin -file ranger-admin-trust.cer
sudo /usr/java/jdk1.7.0_79/bin/keytool -import -file /etc/hadoop/conf/ranger-admin-trust.cer -alias ranger-admin -keystore /usr/java/jdk1.7.0_79/jre/lib/security/cacerts
#followed by a Ranger- and usersync-restart
02-10-2016
01:23 PM
1 Kudo
Hi @Artem Ervits, @Neeraj Sabharwal, in the end, using Ranger policies for Hive on top of HBase works as expected, by defining a Hive policy and an HBase policy for the involved tables. The issue I had was the following, although I really don't understand why it is like it is: switching back from Ranger HTTPS to HTTP left the policy_mgr_url starting with HTTPS://<ranger-admin>:<port> on the HBase REGIONSERVERS; because of that, the regionservers complained that they could not fetch the latest Ranger policies due to an SSL error. This was the reason my HBase policies were never applied: they never got fetched by the regionservers. Now the point that confuses me: why the REGIONSERVERS? On the HBase master nodes there was no error; they had received the latest HBase policies, and therefore the agent heartbeat in the Ranger audit had been updated (which is why I thought everything was fine). Isn't the Ranger plugin behaviour similar to HDFS, where the plugin just hooks into the "master" process (NameNode)? What is the role of Ranger in the regionserver here?
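A quick way to spot this kind of leftover on a regionserver. The config locations vary by HDP/Ranger version, so the paths below are guesses to adapt:

```shell
# Look for a stale https policy manager URL in the Ranger HBase plugin
# configuration on a regionserver. Paths are assumptions -- adjust them
# for your installation.
grep -ri "policy_mgr_url\|policymgr" /etc/hbase/conf/ /usr/lib/hbase/conf/ 2>/dev/null
```

Any hit still starting with https:// after switching Ranger back to HTTP points at the problem described above.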