Member since: 09-25-2015
Posts: 33
Kudos Received: 41
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2826 | 04-11-2016 07:43 PM |
|  | 5022 | 01-13-2016 01:27 AM |
|  | 11245 | 12-17-2015 03:29 AM |
|  | 3521 | 12-16-2015 11:13 PM |
|  | 1165 | 12-08-2015 04:54 PM |
12-08-2015
04:54 PM
2 Kudos
In Kafka, topic creation and deletion are still done directly at the ZooKeeper level and don't go through the broker. If you are using HDP, then out of the box only the "kafka" principal has permission to perform these operations. In a future release, the Kafka community will support topic creation via the broker. Until then, there is not much option but to manage the create/delete permissions using ZooKeeper ACLs.
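As a minimal sketch (the ZooKeeper host, port, and principal name are assumptions for illustration), you can inspect and tighten the ACLs on the Kafka znodes with the ZooKeeper CLI:

```bash
# Connect to ZooKeeper (zookeeper-client on HDP; zkCli.sh upstream)
zookeeper-client -server zk-host:2181

# Inside the shell: check who can currently touch the topics znode
getAcl /brokers/topics

# Restrict create/delete/read/write/admin to the kafka principal (SASL scheme)
setAcl /brokers/topics sasl:kafka:cdrwa
```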
12-08-2015
01:45 AM
1 Kudo
I have found issues when using the latest MySQL 5.7 with Ranger. To work around them, set the following in /etc/my.cnf:

show_compatibility_56 = on
explicit_defaults_for_timestamp
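A quick way to apply this (the file path and service name are standard MySQL defaults; adjust for your environment):

```bash
# Append the workaround settings under a [mysqld] section
sudo tee -a /etc/my.cnf <<'EOF'
[mysqld]
show_compatibility_56 = on
explicit_defaults_for_timestamp
EOF

# Restart MySQL so the settings take effect
sudo systemctl restart mysqld
```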
11-26-2015
06:36 PM
1 Kudo
All of @Andrew Grande's points are valid. You should also consider the performance impact of storing indexes in HDFS, because Solr pulls the indexes from HDFS and keeps them in memory, so you will have to plan your hardware capacity carefully.
11-26-2015
06:33 PM
1 Kudo
For your question specific to storing Ranger audits: if you envision that a lot of audit logs will be generated, you should create multiple shards with a sufficient replication factor for high availability and performance. Another recommendation is to store Ranger audits in both HDFS and Solr. The HDFS copy serves archival and compliance purposes. On the Solr side, you can set a maximum retention period to delete the audit logs after a certain number of days.
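As a minimal sketch (the host name, shard count, and replica count are illustrative assumptions), a sharded, replicated collection for the audits can be created through the Solr Collections API:

```bash
# Create a 3-shard collection with 2 replicas per shard for Ranger audits
curl "http://solr-host:8983/solr/admin/collections?action=CREATE&name=ranger_audits&numShards=3&replicationFactor=2"
```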
11-06-2015
11:26 PM
12 Kudos
Since you brought up this blog, there are three things you need to know: 1. authentication, 2. user/group mapping, and 3. authorization.

1. For authentication, there is no alternative to Kerberos. Once your cluster is Kerberized, you can make certain access paths easier by using AD/LDAP. For example, access HS2 via AD/LDAP authentication, or access various services using Knox.

2. Group mapping can be done in three ways. One, as the blog says, is to look up AD/LDAP to get the groups for the users. The second is to materialize the AD/LDAP users on the Linux servers using SSSD, Centrify, etc. The third is to manually create the users and groups in the Linux environment. All these options apply regardless of whether you have Kerberos (see the verification sketch below).

3. Authorization can be done via Ranger or via the natively supported ACLs. Except for Storm and Kafka, having Kerberos is not mandatory. Without reliable authentication, though, authorization and auditing are meaningless.

Regarding your common use case ("User A logs into the system with his AD credentials, and HDFS or Hive ACLs kick in for authorization"): you have to qualify "system". Which system are you logging into? Only HS2 and Knox let you log in via AD/LDAP. If you are planning to do that, then you have to set up a very tight firewall around your Hadoop cluster. Absolutely no one should be able to connect to the NameNode, DataNode, or any other service port from outside the cluster, except to the JDBC port of HS2 or the Knox port. If you can set up this firewall, then all business users will be secure even if you don't Kerberize your cluster. However, any user who has shell login or port access to an edge node or the cluster, or who can submit a custom job in the cluster, will be able to impersonate anyone.

Setting up this firewall is not a trivial thing. Even if you do, there will be users who need access to the cluster; there should be a limited number of such users, and they should be trusted. And you should not let any unapproved job run within the cluster. If the customer is okay with all of these "ifs" and comfortable with a limited number of super-admin users, then yes, you can have security without Kerberos.
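As a quick verification sketch for point 2 (the username is a hypothetical example), you can compare the groups Hadoop resolves for a user against what the OS resolves:

```bash
# Groups as resolved by Hadoop's group-mapping provider
hdfs groups alice

# Groups as resolved by the OS (SSSD/Centrify/local), which Hadoop consults by default
id -Gn alice
```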
10-14-2015
05:29 PM
@rgarcia@hortonworks.com If the admin user is synchronized from AD, then you will have to update it directly in the Ambari DB. You should probably also create a backup admin user, with a different name and Admin privileges, in Ambari.

mysql> use ambaricustom
mysql> update users set ldap_user=0 where user_name='admin';
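A hedged sketch of creating that backup admin through the Ambari REST API (the host, credentials, and the "backupadmin" name are assumptions for illustration):

```bash
# Create a local (non-LDAP) user with Ambari Admin privileges
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d '{"Users/user_name":"backupadmin","Users/password":"StrongPassword1","Users/active":"true","Users/admin":"true"}' \
  http://ambari-host:8080/api/v1/users
```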
10-07-2015
10:38 PM
All the APIs from Hadoop KMS should also work with Ranger KMS. We could make a note of it in the documentation.
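For example (the host is an assumption; Ranger KMS commonly listens on port 9292), the stock Hadoop KMS REST endpoints can be exercised directly against Ranger KMS:

```bash
# List key names through the standard Hadoop KMS REST API, served by Ranger KMS
curl "http://ranger-kms-host:9292/kms/v1/keys/names"

# On a Kerberized cluster, add SPNEGO auth: curl --negotiate -u : "<url>"
```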
10-07-2015
09:45 PM
1 Kudo
Yes, you will be able to roll over the encryption zone key (EZ key). The EZ key is used to encrypt the key that encrypts the data/file, and there is one active EZ key per encryption zone. You can roll over the EZ key as needed, and new EEKs (file keys) will be encrypted with the new key. However, file/data keys encrypted with older key versions will not be re-keyed. Since EZ keys are versioned, older EEKs will be decrypted with the appropriate version, so everything works seamlessly.
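A minimal sketch of rolling the key (the key name is a hypothetical example):

```bash
# Roll the encryption zone key to a new version;
# existing file keys stay encrypted with their old key version
hadoop key roll mykeyname
```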
10-02-2015
03:45 PM
1 Kudo
The preferred way is to create a file like ranger-admin-env00-java_mem.sh in /etc/ranger/admin/conf with the following values:

JAVA_OPTS=" -XX:MaxPermSize=256m -Xmx1024m "
export JAVA_OPTS

This way, even when Ranger is upgraded, your custom overrides are preserved.
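A hedged sketch of putting that in place (the path and restart command follow common HDP layouts; adjust to your environment):

```bash
# Create the override file so it survives Ranger upgrades
sudo tee /etc/ranger/admin/conf/ranger-admin-env00-java_mem.sh <<'EOF'
JAVA_OPTS=" -XX:MaxPermSize=256m -Xmx1024m "
export JAVA_OPTS
EOF
sudo chmod +x /etc/ranger/admin/conf/ranger-admin-env00-java_mem.sh

# Restart Ranger Admin so the new JVM options take effect
sudo ranger-admin restart
```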