Member since
04-09-2019
254
Posts
140
Kudos Received
34
Solutions
03-19-2022
10:16 PM
Thank you @adhishankarit for sharing this with us. This will be useful for others as well; many readers will benefit from it.
03-10-2022
11:08 PM
Hello @RajeshReddy, the DataSteward role usually grants the “environments/adminRanger” permission, which makes the user a Ranger and Atlas admin. That should suffice to create a tag-based policy. Can we get more info on the error you are getting? Any screenshot or error message would greatly help us assist you further. Thanks.
03-10-2022
12:18 AM
Hello, in order to assist with this, I'd need to see more of the Python code you are running. Could you please share the snippet around the POST call? Also, I noticed RangerPDPKnoxFilter in the stack trace, which means your Knox topology (cdp-proxy-api) has the Ranger Knox plugin enabled for authorization. Can you please disable the plugin (for testing only) and try again? Hope this helps. Thanks.
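For reference, this is roughly the shape of the snippet we'd need to see. The gateway host, topology path, and credentials below are placeholders for illustration, not your actual values; the request is prepared but not sent, just to show what Knox would receive:

```python
import requests

# Placeholder values: gateway host, WebHDFS path, and credentials are
# illustrative only, not the actual environment from this thread.
url = ("https://knox-gateway.example.com:8443"
       "/gateway/cdp-proxy-api/webhdfs/v1/tmp/test?op=MKDIRS")

# Prepare (but do not send) the POST, to inspect method, URL, and the
# Authorization header that Knox and the Ranger Knox plugin will evaluate.
req = requests.Request("POST", url, auth=("workload-user", "secret")).prepare()
print(req.method, req.url)
print(req.headers["Authorization"].split()[0])  # Basic
```

Sharing the equivalent of the `requests.Request(...)` line from your script, plus the full response body, would tell us whether the 403/401 is coming from Knox authentication or from the Ranger authorization filter.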
06-05-2018
06:21 PM
@Karthik Palanisamy, this is really a good piece of information. Thanks for sharing. Keep them coming!
02-15-2018
08:10 PM
5 Kudos
Motivation: When Hadoop components (HDFS, among others) are configured to connect to external sources like LDAP, the LDAP bind password must be given in a configuration file (core-site.xml) in clear text. In many enterprise environments, passwords in clear text are not allowed and are often flagged as a risk in security audits. This article shows Hadoop administrators how to secure these plaintext passwords in the Hadoop configuration.

Configuration Steps:

1. Before starting, make sure that LDAP bind is working with the plain-text password. HDFS should be configured with these settings in core-site.xml (change the values to match your LDAP/AD environment; these are working example values from my AD setup):

hadoop.security.group.mapping=org.apache.hadoop.security.LdapGroupsMapping
hadoop.security.group.mapping.ldap.base=ou=CorpUsers,dc=lab,dc=hortonworks,dc=net
hadoop.security.group.mapping.ldap.bind.user=cn=ldap-reader,ou=ServiceUsers,dc=lab,dc=hortonworks,dc=net
hadoop.security.group.mapping.ldap.bind.password=s0mePassw0rd
hadoop.security.group.mapping.ldap.search.attr.group.name=cn
hadoop.security.group.mapping.ldap.search.attr.member=member
hadoop.security.group.mapping.ldap.search.filter.group=(objectclass=group)
hadoop.security.group.mapping.ldap.search.filter.user=(objectcategory=person)
hadoop.security.group.mapping.ldap.url=ldap://myad.lab.hortonworks.net:389

Notice that the LDAP bind password is in clear text.

2. At this point, HDFS should also be able to resolve LDAP group(s) for an LDAP user. To check, use the command hdfs groups <username>. For example:

# hdfs groups hr1
hr1 : hadoop-users hadoop-admins HDP Ranger Admins

With this basic setup in place, we are ready to secure our plaintext password.

3. Hadoop offers a Credential Provider API that can be used to secure various passwords (not just the LDAP bind password) in JCEKS (Java Cryptography Extension KeyStore) files. We will use it in this article.

4. First, create a JCEKS file using the hadoop credential command to store the property name and bind password:

# hadoop credential create hadoop.security.group.mapping.ldap.bind.password -value s0mePassw0rd -provider jceks://file/etc/security/bind.jceks
hadoop.security.group.mapping.ldap.bind.password has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.

This command creates /etc/security/bind.jceks containing the encrypted password, with a default permission of 700.

5. Update the file permission of /etc/security/bind.jceks to 755 for the root user:

# chmod 755 /etc/security/bind.jceks
# ls -l /etc/security/bind.jceks
-rwxr-xr-x. 1 root root 533 Feb 15 20:00 /etc/security/bind.jceks

6. Point the Hadoop configuration (core-site.xml) at this credential provider:

hadoop.security.credential.provider.path=localjceks://file/etc/security/bind.jceks

and remove the hadoop.security.group.mapping.ldap.bind.password property as well.

7. Restart the HDFS NameNode service to load the new property.

8. Verify that LDAP groups still resolve for an LDAP user.
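What makes step 6 work is the lookup order Hadoop uses when resolving a password property: credential providers listed in hadoop.security.credential.provider.path are consulted first, and the clear-text configuration property is only a fallback. A rough sketch of that resolution order in Python (the function and the dictionaries are illustrative stand-ins, not Hadoop's actual code):

```python
# Illustrative sketch (not Hadoop source) of the resolution order behind
# Configuration.getPassword(): credential providers are checked first,
# then the clear-text config property as a fallback.

def get_password(conf, providers, name):
    """Mimic the lookup order: provider entries win over clear-text config."""
    if name in providers:          # e.g. an alias stored in bind.jceks
        return providers[name]
    return conf.get(name)          # clear-text fallback from core-site.xml

# After step 6: the password lives only in the JCEKS file, not in core-site.xml.
core_site = {"hadoop.security.credential.provider.path":
             "localjceks://file/etc/security/bind.jceks"}
jceks = {"hadoop.security.group.mapping.ldap.bind.password": "s0mePassw0rd"}

print(get_password(core_site, jceks,
                   "hadoop.security.group.mapping.ldap.bind.password"))
```

This is also why step 6 removes the clear-text property: once the alias exists in the JCEKS store, the plaintext value is never needed again.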
12-20-2017
06:33 PM
@Phil Zampino, this is a really informative and valuable article. Thanks for writing it. Keep it up!
09-14-2017
06:59 PM
+1. Very useful article, bookmarked. Thank you!
06-29-2017
06:31 PM
1 Kudo
Great news!
06-03-2017
08:14 PM
Pretty informative and useful. Thanks @Dominika Bialek for writing this. Keep it up!
06-02-2017
01:52 AM
4 Kudos
Unlike other services, Knox doesn't expose Java heap settings via Ambari. Follow these steps to change the default heap settings for Knox:

1. On the Knox node, log in as root and go to /usr/hdp/current/knox-server/bin:

# cd /usr/hdp/current/knox-server/bin

2. Make a backup copy of the file we are going to change next:

# cp gateway.sh gateway.sh.backup.`date +%m%d%Y-%H-%M-%S`

3. Open gateway.sh in a text editor and change this line:

APP_MEM_OPTS=""

to this:

APP_MEM_OPTS="-Xmx5g -XX:NewSize=3G -XX:MaxNewSize=3G -verbose:gc -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -Xloggc:/var/log/knox/knox-gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"

Shown above are example values for the Knox heap; they should be updated based on your environment. A general formula for the Knox heap size is:

Recommended Knox Heap Size =
(WebHDFS(ConcNumOfRequest * replayBufferSize) +
HBase(ConcNumOfRequest * replayBufferSize) +
Hive(ConcNumOfRequest * replayBufferSize)) + 20% for request surges

where:
ConcNumOfRequest = number of concurrent requests expected for each component (WebHDFS, HBase, Hive, etc.)
replayBufferSize = size of the largest incoming request to Knox

Based on the resulting heap size, further tune NewSize and MaxNewSize.

4. Save the changes to gateway.sh and restart Knox via Ambari. Confirm the new settings with the 'ps -ef | grep gateway' command:

# ps -ef| grep gateway
knox 29236 1 93 23:18 ? 00:00:29 /usr/jdk64/jdk1.8.0_77/bin/java -Xmx5g -XX:NewSize=3G -XX:MaxNewSize=3G -verbose:gc -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -Xloggc:/var/log/knox/knox-gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -jar /usr/hdp/current/knox-server/bin/gateway.jar
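As a quick sanity check, the sizing formula above can be worked through in a few lines. The per-component request counts and buffer sizes below are made-up example numbers, not recommendations:

```python
# Example heap-size calculation for the formula above.
# The per-component numbers are illustrative only, not sizing advice.

def knox_heap_bytes(components, surge=0.20):
    """components: list of (concurrent_requests, replay_buffer_size_bytes).

    Sums ConcNumOfRequest * replayBufferSize per component, then adds
    20% headroom for request surges, per the formula in the article.
    """
    base = sum(n * buf for n, buf in components)
    return base * (1 + surge)

MB = 1024 * 1024
heap = knox_heap_bytes([
    (100, 8 * MB),   # WebHDFS: 100 concurrent requests, 8 MB largest request
    (50,  4 * MB),   # HBase
    (50,  4 * MB),   # Hive
])
print(round(heap / (1024 ** 3), 2), "GB")
```

With these example inputs the base comes to 1200 MB, and the 20% surge headroom brings the recommended heap to roughly 1.4 GB, which you would then round to a convenient -Xmx value and use to tune NewSize/MaxNewSize.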