Member since: 04-09-2019
Posts: 254
Kudos Received: 140
Solutions: 34
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2089 | 05-22-2018 08:32 PM
 | 14282 | 03-15-2018 02:28 AM
 | 3802 | 08-07-2017 07:23 PM
 | 4596 | 07-27-2017 05:22 PM
 | 2595 | 07-27-2017 05:16 PM
10-03-2016
10:01 AM
1 Kudo
Hello @Avijeet Dash, can you please change the value of 'Hive Authentication' to this:
auth=KERBEROS;principal=hive/_HOST@HCLBASIT.local;hive.server2.proxy.user=${username}
Save the changes and let us know if the view is working now for users 'hr1', 'legal1' and 'mktg1'. Hope this helps.
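For reference, with this setting the Hive view should end up connecting over a JDBC URL roughly of this shape; the host and port below are placeholders, not values from this thread:
jdbc:hive2://<hiveserver2-host>:10000/;principal=hive/_HOST@HCLBASIT.local;hive.server2.proxy.user=<logged-in-user>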
10-03-2016
07:24 AM
Hello @Avijeet Dash, if the Hive view is not working even for AD users, then we'd like to see a screenshot of the Hive view configuration from Ambari. Please attach it and we'll review the configuration. Thanks.
10-01-2016
03:51 PM
1 Kudo
Hello @Avijeet Dash, support for Kerberos in HDP 2.x is very much production ready. A lot of users are already running Kerberos in their production clusters. It also depends on which Hadoop components you are trying to use in production. Let's look at the component-level issues:
> 1) Looks like a number of UIs (Atlas, Ambari views etc.) need Browser to be set up for Kerberos
This is a basic requirement for any Kerberos-enabled service. For access over HTTP, services usually support SPNEGO, so SPNEGO needs to be enabled in the browser (see the Firefox sketch below).
> 2) Zeppelin doesn't work with kerberos
Zeppelin doesn't require any special configuration to work with Kerberos; it relies on the underlying services (YARN etc.) to handle it. Moreover, Zeppelin was added as a technical preview in HDP 2.4, so it was not meant to be used in a production cluster. Hortonworks recommends upgrading to HDP 2.5 if you want to use it in production.
> 3) There are issues around oozie - sqoop
The Oozie sqoop action with Kerberos is tested and working successfully. Most of the errors are usually configuration issues. You are encouraged to post the issues here so that we can suggest solutions.
On a generic note, it is highly recommended to upgrade to HDP 2.5.0 so that you get the latest component versions with loads of fixes & new features. Thank you.
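For item 1, a minimal sketch of what enabling SPNEGO looks like in Firefox (via about:config); the domain is an assumption based on the realm mentioned in this thread, adjust it to your own:
network.negotiate-auth.trusted-uris = .hclbasit.local
network.negotiate-auth.delegation-uris = .hclbasit.local
Chrome and Internet Explorer have their own equivalent settings for whitelisting servers for Negotiate authentication.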
10-01-2016
10:19 AM
Hi @Avijeet Dash, a couple of things to check:
1. Is a user-based policy working for the Hive view?
2. If only the group policy is not working, then the user's group membership resolution needs to be checked. Can Ranger & HiveServer2 (in that order) resolve which groups a user belongs to?
3. Hadoop services generally depend on the operating system's ability to resolve user/group membership. Have you configured your system to resolve AD user/group information? (See the quick check below.)
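For point 3, a quick hedged check you can run on any cluster node; 'hr1' is one of the users mentioned in this thread:
id hr1            # OS-level user/group resolution (requires SSSD/LDAP integration for AD users)
hdfs groups hr1   # the groups Hadoop itself resolves for the same user
If the two outputs disagree, or the AD groups are missing from either, that points to where the resolution is broken.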
10-01-2016
10:13 AM
Hi @Avijeet Dash, the browser cannot do kinit; it can only use a Kerberos ticket that is already available on the system. If it is a Windows desktop, it can automatically get a ticket from AD during user login. If it is a non-Windows desktop, the logged-in user needs to acquire a ticket manually by doing kinit.
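On a non-Windows desktop that would look like this; the principal below is a placeholder:
kinit hr1@EXAMPLE.COM   # acquire a ticket from the KDC / AD
klist                   # verify the cached ticket and its lifetime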
09-26-2016
10:41 AM
Hello @Rahul Buragohain, you need to check whether you are getting all 15 groups in the ldapsearch command output. Also, please share that ldapsearch command along with its options. Your group search filter is going to match every record that has a "cn" field, which is probably all records, so you might want to try again after removing the group search filter. Also, please change the search base to "OU=Groups,DC=example,DC=com" (with the correct case). Not that it is going to change anything, but just to be on the safe side. A hedged example of such a query is below. Hope this helps.
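A sketch of an ldapsearch that lists the groups a user belongs to; the host, bind DN and member DN are hypothetical:
ldapsearch -H ldap://ad.example.com -D "binduser@example.com" -W \
  -b "OU=Groups,DC=example,DC=com" "(member=CN=rahul,OU=Users,DC=example,DC=com)" cn
All 15 groups should come back as cn entries; if some are missing here, Ranger and HiveServer2 won't see them either.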
09-25-2016
07:26 PM
1 Kudo
Hello @Avijeet Dash, assuming you've enabled Kerberos for the Atlas service and the Atlas service is up & running, from the error message it looks like the client (browser) is missing a Kerberos ticket. The quickest way to check would be to go to any cluster node and use curl to access the Atlas UI like this:
kinit <username>
curl -i -u : --negotiate 'http://<ambari-host>:21000/#!/search?user.name=<username>'
If everything works fine, this request should return '200 OK'. That would mean that your browser (and/or the node running the browser) is not configured to perform Kerberos authentication, and you'll then need to follow this link to enable Kerberos support in the browser. If the above curl command doesn't return '200 OK', then we'll need to investigate that first. Hope this helps.
09-25-2016
07:14 PM
1 Kudo
Hello @Avijeet Dash,
> How does ambari views work? Does it kinit for the user who has logged in?
Once Kerberos is enabled for Hadoop services, any client connecting to these services needs to carry a Kerberos ticket (TGT). Since Ambari views are effectively clients, they also need a Kerberos ticket. And because Ambari doesn't (yet) accept Kerberos user login, there is no ticket available for the logged-in user. Therefore, we need to "set up Ambari Server for Kerberos" so that the Ambari server can acquire a Kerberos ticket upon startup. The Ambari views (not all views, though) use this Kerberos ticket to connect to a Kerberized Hadoop service.
> If we integrate Ambari with AD - will this problem be solved?
Yes, that's correct and will solve this issue.
> How do we have multiple users kinit on the same node?
The Kerberos tickets acquired by running kinit are stored in a credential cache file named after the user's UID (e.g. /tmp/krb5cc_501). Thus, by default, multiple users can log in and have their own tickets without any conflict. Alternatively, one can override the default credential cache location by exporting the environment variable KRB5CCNAME=<path-to-cc-file>; in that case, the onus of conflict resolution is on the system admin. Hope this helps.
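A small illustration of the per-user caches and the override; the custom cache path and principal are hypothetical:
klist                              # default cache, e.g. /tmp/krb5cc_501
export KRB5CCNAME=/tmp/krb5cc_etl  # override the cache location for this shell
kinit etl@EXAMPLE.COM              # this ticket lands in the custom cache
klist                              # now reports /tmp/krb5cc_etl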
09-25-2016
06:46 PM
Hello @Gagan Brahmi,
> Does this look to be related to the Kerberos ticket lifetime?
If this is happening every time, exactly after 10 hours, then it does look related, but I'd rather confirm before concluding.
> Is there a delegation token renewal for yarn? Or is it just the hdfs?
Yes, there is a delegation token renewal mechanism in YARN. As for clues on why YARN is removing the application every 10 hours, I'd look in the ResourceManager log for any warnings & errors around the application ID (especially around the 10-hour mark from job submission). Also, what does the application log say about this? Any error / warning? A hedged example of pulling those logs is below.
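For example; the application ID and log path here are hypothetical:
# warnings/errors for the app in the ResourceManager log
grep application_1474000000000_0001 /var/log/hadoop-yarn/yarn/*resourcemanager*.log | grep -iE 'warn|error'
# the application's own logs (once it has finished and log aggregation is enabled)
yarn logs -applicationId application_1474000000000_0001 | grep -iE 'warn|error'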
09-25-2016
10:27 AM
1 Kudo
Hi @Peter Coates, HDFS does support heterogeneous storage types, but specifying your own storage type is not supported; you need to use one of the pre-defined types (ARCHIVE, DISK, SSD and RAM_DISK). Storage policies then control how block creation & replica placement map onto those types. So only if you can differentiate between your encrypted and non-encrypted volumes based on these storage types can you control where HDFS writes a file. An example follows the reference below. Hope this helps. Reference:
https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html
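For illustration, assuming the encrypted volumes are tagged as [SSD] in dfs.datanode.data.dir, you could pin a (hypothetical) directory to them like this:
hdfs storagepolicies -setStoragePolicy -path /data/encrypted -policy ALL_SSD
hdfs storagepolicies -getStoragePolicy -path /data/encrypted
New files under that directory would then have all replicas written to the SSD-tagged volumes, subject to available space.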