Member since
01-27-2016
14
Posts
8
Kudos Received
0
Solutions
02-22-2017
12:19 AM
Ranger audits only those HDFS files and folders that are covered by a policy with auditing enabled; all other requests fall back to HDFS ACLs. The audit folders in HDFS are owned by the respective component's superuser (created when the plugin is enabled) and carry the HDFS ACLs needed to write the audit logs, so there is no circular dependency in auditing the audit writes themselves. As far as I know, you can also start Ranger after HDFS is available. The only caveat is that when starting HDFS via Ambari, the start operation runs service checks that can take some time before HDFS comes up; this has no relation to HDFS being Ranger's audit sink.
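If you want to verify this on your own cluster, you can inspect the audit location directly. A minimal sketch, assuming the HDP default audit path /ranger/audit; your cluster may use a different directory, and the per-component subfolder names are illustrative:

```shell
# List the per-component audit folders and their owners
# (/ranger/audit is the HDP default; adjust for your cluster)
hdfs dfs -ls /ranger/audit

# Show the ACLs that let a component's superuser write its own audit logs
# (the "hdfs" subfolder name is an example)
hdfs dfs -getfacl /ranger/audit/hdfs
```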
04-06-2016
02:46 PM
1 Kudo
Hi @Anna Shaverdian, In addition to what has already been posted, I wanted to mention something about Blueprint exports. In general, you should be able to re-use an exported Blueprint from a running cluster without any changes. There are a few exceptions, however:

1. Passwords: Any password properties are filtered out of the exported Blueprint and must be added back explicitly during a redeployment attempt.

2. External Database Connections: I'm defining an "external" DB connection as any DB that is used by the cluster but is not managed by Ambari. The DB instance used by Ranger is one example, but Oozie and Hive can also use separate instances that are not managed by Ambari. In these cases, the Blueprint export process filters out these properties, since they are not necessarily portable to the new environment.

From what I've seen in this posting, these are the kinds of properties that you'll need to add back into your Blueprint or Cluster Creation template, as the other posts have indicated. Hope this helps, Thanks, Bob
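As a rough illustration, filtered-out password properties can be re-added under the "configurations" section of the Blueprint or cluster creation template. The config types and property names below (ranger-env / ranger_admin_password, admin-properties / db_password) are examples that vary by stack version, so check your own export for the exact names:

```json
{
  "configurations": [
    {
      "ranger-env": {
        "properties": {
          "ranger_admin_password": "ADMIN_PASSWORD_HERE"
        }
      }
    },
    {
      "admin-properties": {
        "properties": {
          "db_password": "RANGER_DB_PASSWORD_HERE"
        }
      }
    }
  ]
}
```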
02-11-2016
10:23 PM
1 Kudo
@Anna Shaverdian 1] For this, the existing keys need to be imported into Ranger KMS (using a script provided by Ranger KMS). 2] Please check your KMS repo configuration. It looks like you are using Kerberos, but the repo config user name is not a valid Kerberos user. Please refer to the docs here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_Ranger_KMS_Admin_Guide/content/ch02s01s03.html Since the KMS repo is already created, the username needs to be changed directly in the Ranger UI, not in Ambari.
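For step 1], the key import is typically run from the Ranger KMS install directory on the KMS host. The script name, path, and arguments below are assumptions from HDP 2.3-era installs and may differ in your version, so confirm them against the linked Ranger KMS docs before running:

```shell
# Run on the Ranger KMS host as the kms service user
# (script name, path, and arguments are assumptions; verify against the docs)
cd /usr/hdp/current/ranger-kms
./importJCEKSKeys.sh /path/to/existing/kms.keystore jceks
```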
03-16-2017
06:18 PM
I am having a similar issue. We have a non-Kerberized Hadoop/Kafka environment, and I am testing Ranger-Kafka integration to secure it. HDP version: HDP-2.3.4.0-3485. This is what I did:
-- Enabled the Kafka plugin in Ranger.
-- Restarted Ranger.
-- Created the following policies in Ranger (see the image). (Important: added the group "public" and left the policy condition blank.)
-- Logged in to server 21 to produce and consume messages.
-- I was able to produce and consume messages from any server.
What we want is to secure our Kafka environment through Ranger by IP address. I understand that establishing the identity of a client user over a non-secure channel is not possible. I followed this article to secure our Kafka environment: https://cwiki.apache.org/confluence/display/RANGER/Kafka+Plugin#KafkaPlugin-WhydowehavetospecifypublicusergrouponallpoliciesitemscreatedforauthorizingKafkaaccessovernon-securechannel Please let me know what I am missing.
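For reference, the produce/consume test above can be run with the Kafka console tools shipped with HDP. The broker/ZooKeeper hosts and topic name below are placeholders, and 6667 is the HDP default PLAINTEXT port (old-style consumer flags for the Kafka 0.9 line in HDP 2.3):

```shell
# Produce on the PLAINTEXT listener (host and topic are placeholders; 6667 is the HDP default)
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list broker-host:6667 --topic test-topic

# Consume the same topic
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --zookeeper zk-host:2181 --topic test-topic --from-beginning
```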