Member since: 04-09-2019
Posts: 254
Kudos Received: 140
Solutions: 34

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1536 | 05-22-2018 08:32 PM |
|  | 10673 | 03-15-2018 02:28 AM |
|  | 2836 | 08-07-2017 07:23 PM |
|  | 3700 | 07-27-2017 05:22 PM |
|  | 2002 | 07-27-2017 05:16 PM |
05-15-2018
10:21 PM
@karim farhane, ZEPPELIN-2796 is included from HDP 2.6.3 onwards. FYI.
05-09-2018
04:25 PM
Hello @Bhushan Kandalkar, At this point, I'd enable debug logging for Beeline and check where exactly it is failing. Also, I'm surprised that neither HS2 instance shows any sign of error while Beeline shows a '500 internal server error'. I hope you have checked both HS2 logs. Anyway, the Beeline debug output should tell us more. Hope this helps!

UPDATE: I looked at it again, and that '500 internal server error' is actually from Knox, due to this line:

2018-05-08 08:32:12,767 ERROR hadoop.gateway (AbstractGatewayFilter.java:doFilter(63)) - Failed to execute filter: java.io.IOException: Service connectivity error

This tells me that Knox is not able to connect to your authentication server (as defined in the topology). So instead of enabling debug in Beeline, I'd enable debug in Knox to learn more. Also, are you able to make an HDFS call via Knox using the same topology (just to verify the topology configuration)?
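To test an HDFS call through the same topology, something like the following WebHDFS request via the Knox gateway should work (the hostname, topology name, and credentials below are placeholders for your environment, not values from this thread):

# List /tmp via WebHDFS through Knox; replace host, topology, user, and password
curl -iku myuser:mypassword 'https://knox-host.example.com:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS'

If that call also fails with a connectivity error, the problem is in the topology's authentication provider rather than anything Hive-specific.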
03-28-2018
10:50 PM
I stumbled on this today. Not sure if you are still looking for answers, but here we go... Btw, thanks @Ravindra Punuru for the debug output. From the debug output, the Kerberos layer is not able to resolve the hive/_HOST@REALM principal into the correct principal name, hence the error "Server not found in Kerberos database". Please try replacing _HOST with the FQDN of the HiveServer2 node. Thanks!
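For example, the connection string would change from something like this (the host and realm below are placeholders for your environment):

jdbc:hive2://hs2-host.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM

to this, with _HOST spelled out as the HiveServer2 FQDN:

jdbc:hive2://hs2-host.example.com:10000/default;principal=hive/hs2-host.example.com@EXAMPLE.COM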
03-15-2018
09:45 PM
That's correct @GN_Exp. If you want to do Service Level Authorization (SLA) in Knox via the Ranger plugin, then you'd need Kerberos too.
03-15-2018
02:28 AM
Hello @GN_Exp, There are a couple of things here:

1. From your gateway.log (dt: 03/12), it looks like the Knox Gateway is trying to initialize RangerPDPKnoxFilter in the gateway request filter chain and failing while doing so. RangerPDPKnoxFilter is used when Kerberos is configured. Since you don't have Kerberos configured, you should not be using it.

2. To enable the Ranger plugin for the Knox gateway service, you do not always need XAsecurePDPKnox as the authorization provider; "AclsAuthz" will do just fine. XAsecurePDPKnox is usually used for Service Level Authorization in a Knox topology, which you don't need for the Ranger plugin test connection. Therefore, please stick to "AclsAuthz" unless you have another use case (see the snippet below).

If you still have problems with the Knox service repo in Ranger, please attach a screenshot of the Knox repo configuration from the Ranger UI and a screenshot of the error (if any). Hope this helps!
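For reference, a minimal sketch of the authorization provider section in your Knox topology file would look like this (the rest of the topology is omitted):

<provider>
   <role>authorization</role>
   <name>AclsAuthz</name>
   <enabled>true</enabled>
</provider>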
03-13-2018
08:52 PM
Hello @Jinyu Li, For Ranger hdfs-agent debug, please change the log4j configuration for the NameNode and add this line: log4j.logger.org.apache.ranger=DEBUG Your debug log messages will appear in the NameNode log. Hope this helps!
03-13-2018
08:36 PM
Hello @Richard Grossman, Please follow this article to enable debug logging for Beeline; that might help you. Please paste the relevant debug log here if you want us to have a look. Hope this helps!
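The article link isn't preserved in this archive, but one common approach on HDP 2.x (an assumption, not necessarily what the linked article describes) is to raise the root logger in the Beeline client's log4j configuration:

# In /etc/hive/conf/beeline-log4j.properties (copy from the .template if absent), set:
log4j.rootLogger=DEBUG,console

Then rerun Beeline and capture the console output.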
02-26-2018
08:11 PM
Hello @L James, First of all, having the Ranger Admin service up and running is not a hard requirement. Meaning: once the Hadoop services (NameNode, YARN ResourceManager, etc.) have synced the policy information, they'll continue to use it even if the Ranger Admin service is down. So there is no direct impact on the running Hadoop services. What you'll miss is that any new update to a policy will not be synced, as the Ranger plugin won't be able to communicate with Ranger Admin. Come to think of it, no update to a policy is even possible while Ranger Admin is down, so this is not a problem either.

Some power HDP users put a load balancer (haproxy, F5, etc.) in front of two Ranger Admin services, and that works just fine. In this case, each plugin gets its policies from the load balancer URL instead of an individual Ranger Admin URL, via the ranger.plugin.hdfs.policy.rest.url property.

> Even if i construct ranger admin load balancer to another server, all services of using ranger plugin need to restart.

Based on my description above, this is not true. When one Ranger Admin is down and the load balancer is pointing to the other Ranger Admin, the plugins need not restart as long as they are configured with the load balancer URL. Hope this helps!
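For example, in the HDFS plugin configuration the property would point at the load balancer rather than a single Ranger Admin host (the hostname below is a placeholder; 6080 is the default Ranger Admin port):

ranger.plugin.hdfs.policy.rest.url=http://ranger-lb.example.com:6080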
02-15-2018
08:10 PM
5 Kudos
Motivation: When Hadoop components (HDFS et al.) are configured to connect to external sources like LDAP, the LDAP bind password needs to be given in a configuration file (core-site.xml) in clear text. In many enterprise environments, having a password in clear text is not allowed and is often flagged as a risk in security audits. This article shows Hadoop administrators how to secure these plaintext passwords in the Hadoop configuration.

Configuration Steps:

1. Before starting, make sure that the LDAP bind is working with the plaintext password. So HDFS should be configured with these settings in core-site.xml (change the following values to match your LDAP/AD environment; these are working example values from my AD setup):

hadoop.security.group.mapping=org.apache.hadoop.security.LdapGroupsMapping
hadoop.security.group.mapping.ldap.base=ou=CorpUsers,dc=lab,dc=hortonworks,dc=net
hadoop.security.group.mapping.ldap.bind.user=cn=ldap-reader,ou=ServiceUsers,dc=lab,dc=hortonworks,dc=net
hadoop.security.group.mapping.ldap.bind.password=s0mePassw0rd
hadoop.security.group.mapping.ldap.search.attr.group.name=cn
hadoop.security.group.mapping.ldap.search.attr.member=member
hadoop.security.group.mapping.ldap.search.filter.group=(objectclass=group)
hadoop.security.group.mapping.ldap.search.filter.user=(objectcategory=person)
hadoop.security.group.mapping.ldap.url=ldap://myad.lab.hortonworks.net:389

Notice that the LDAP bind password is in clear text.

2. Also at this point, HDFS should be able to resolve LDAP group(s) for an LDAP user. To check, use the command hdfs groups <username>. For example:

# hdfs groups hr1
hr1 : hadoop-users hadoop-admins HDP Ranger Admins

With this basic setup, we are ready to secure our plaintext password.

3. Hadoop offers the Credential Provider API, which can be used to store various passwords (not just the LDAP bind password) in secure JCEKS (Java Cryptography Extension KeyStore) files. We will use it in this article.

4. First, create a JCEKS file using the hadoop credential command to store the property name and bind password:

# hadoop credential create hadoop.security.group.mapping.ldap.bind.password -value s0mePassw0rd -provider jceks://file/etc/security/bind.jceks
hadoop.security.group.mapping.ldap.bind.password has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.

This command creates a /etc/security/bind.jceks file with the encrypted password and a default permission of 700.

5. Update the file permission of /etc/security/bind.jceks to 755 for the root user:

# chmod 755 /etc/security/bind.jceks
# ls -l /etc/security/bind.jceks
-rwxr-xr-x. 1 root root 533 Feb 15 20:00 /etc/security/bind.jceks

6. Now let's use this credential provider in the Hadoop configuration (core-site.xml):

hadoop.security.credential.provider.path=localjceks://file/etc/security/bind.jceks

and remove the hadoop.security.group.mapping.ldap.bind.password property as well.

7. Restart the HDFS NameNode service to load the new property.

8. Verify that LDAP groups can still be resolved for an LDAP user.
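For the verification in step 8, you can confirm that the alias is stored in the keystore and then re-run the group lookup from step 2 (hr1 is just the example user from this article):

# hadoop credential list -provider jceks://file/etc/security/bind.jceks
# hdfs groups hr1

If the group list matches the output from step 2, the bind password is being read from the JCEKS file and no longer needs to appear in clear text.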
01-29-2018
08:25 PM
Hello @Mohamed Ismail Peer, Can you please try this JDBC connection string: jdbc:hive2://node1:2181,node2:2181,node3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@<realm> Let us know the Kerberos debug output.
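If you haven't enabled Kerberos debug yet, one way to get that output (an assumption about your client setup; adjust as needed) is to set the JVM debug flag for the Beeline client before connecting:

# Print the Kerberos negotiation trace to the console
export HADOOP_CLIENT_OPTS="-Dsun.security.krb5.debug=true"

Then run Beeline with the connection string above and paste the trace here.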