Member since
04-09-2019
254
Posts
139
Kudos Received
34
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 800 | 05-22-2018 08:32 PM
 | 5359 | 03-15-2018 02:28 AM
 | 1445 | 08-07-2017 07:23 PM
 | 1836 | 07-27-2017 05:22 PM
 | 1032 | 07-27-2017 05:16 PM
03-19-2022
10:16 PM
Thank you @adhishankarit for sharing this with us. This will be useful for others as well. Many readers will benefit from it.
03-10-2022
11:08 PM
Hello @RajeshReddy, The DataSteward role usually grants the “environments/adminRanger” permission, which makes the user a Ranger and Atlas admin. This should suffice to create a tag-based policy. Can you share more info on the error you are getting? Any screenshot or error message would greatly help us assist you further. Thanks.
03-10-2022
12:18 AM
Hello, In order to assist with this, I'd need to see more of the Python code you are running. Could you please share the snippet around the POST call? Also, I noticed RangerPDPKnoxFilter in the stack trace. This means your Knox topology (cdp-proxy-api) has the Ranger Knox plugin enabled for authorization. Can you please disable the plugin (only for testing) and try again? Hope this helps. Thanks.
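For context, here's a minimal sketch of how such a Knox-proxied API URL is typically assembled before the POST is made. The gateway host, port, and service path below are placeholders, not taken from the post:

```python
from urllib.parse import urljoin

def knox_api_url(gateway_host, topology, service_path):
    """Build the URL for an API endpoint proxied by a Knox topology (hypothetical helper)."""
    base = f"https://{gateway_host}:8443/gateway/{topology}/"
    return urljoin(base, service_path.lstrip("/"))

# Example: a WebHDFS-style call routed through the cdp-proxy-api topology
url = knox_api_url("knox.example.com", "cdp-proxy-api", "/webhdfs/v1/tmp?op=MKDIRS")
```

If the Ranger Knox plugin denies the request, the failure surfaces at this gateway layer, before the backing service is ever reached.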
07-30-2019
08:46 PM
Hello @Koffi, Based on your connection string and problem description, it looks like you are not using the right principal in the Beeline connection string:

shell> beeline -u "jdbc:hive2://<hostname>:10000/default;principal=mapr/<FQDN@REALM>"

Please use the following instead (the principal must be that of the HiveServer2 service):

shell> beeline -u "jdbc:hive2://<hostname>:10000/default;principal=hive/<FQDN-of-HS2>@REALM"

Hope this helps!
07-30-2019
08:43 PM
Hello @Rajesh Reddy, If you have two KDC servers in the same realm and with the same domain name, then you don't really need to set up any trust between them. A ticket issued by one KDC will automatically be honored by the other. Hope this helps!
07-29-2019
01:14 AM
Hello @shraddha srivastav, You seem to be using a slightly different dependency package. As per the Zeppelin 0.7 doc (https://zeppelin.apache.org/docs/0.7.0/interpreter/jdbc.html#redshift), it is tested with the dependency "com.amazonaws:aws-java-sdk-redshift:1.11.51". Would you mind following the doc and setting your property exactly as shown there? Thanks, hope this helps!
07-29-2019
01:08 AM
Hello @Rekha Goverthanam, Are you using GitNotebookRepo to store your notebooks?
06-05-2019
08:19 PM
Hello @Aditya Jadhav, HDP is not tested with Hydra for Knox SSO, but the general understanding is: as long as Hydra supports SAML-based / OIDC authentication, you can make it work with Knox SSO. Please have a look at this article, which discusses making OIDC work with Knox: https://community.hortonworks.com/articles/171892/configure-knox-with-openid-connect.html Hope this helps!
05-29-2019
03:09 AM
Hello @Vidya Sagar S, I think I see your problem. It's with the key of the HTTP/RANGER_FQDN principal. Most probably, the key of the Ranger SPNEGO principal (HTTP/RANGER_FQDN) has been changed/updated in the Kerberos database but not in the spnego.service.keytab on the Ranger node, hence the "checksum failed" error. To confirm this, please get the output of these commands on the Ranger host:

# klist -kt /etc/security/keytabs/spnego.service.keytab
# kinit <any-working-principal>
# kvno HTTP/<RANGER_FQDN>

If the Key Version Number (KVNO) in the first command's output and the last command's output don't match, then that's the issue. To fix it, export the key of HTTP/RANGER_FQDN into a new keytab and replace spnego.service.keytab on the Ranger host, then restart Ranger Admin from Ambari. Hope this helps!
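To make the KVNO comparison concrete, here is a small helper sketch (not from the original post) that parses `klist -kt` output and collects the KVNOs recorded for a given principal, so you can compare them against what `kvno` reports. The sample output below is hypothetical:

```python
import re

def keytab_kvnos(klist_output, principal):
    """Collect the KVNOs listed for `principal` in `klist -kt` output."""
    kvnos = set()
    for line in klist_output.splitlines():
        # Data lines look like: "   2 09/17/18 06:41:00 HTTP/host@REALM"
        m = re.match(r"\s*(\d+)\s+\S+\s+\S+\s+(\S+)", line)
        if m and m.group(2).startswith(principal):
            kvnos.add(int(m.group(1)))
    return kvnos

sample = """Keytab name: FILE:/etc/security/keytabs/spnego.service.keytab
KVNO Timestamp         Principal
---- ----------------- -----------------------------------------
   2 09/17/18 06:41:00 HTTP/ranger.example.com@EXAMPLE.COM
   2 09/17/18 06:41:00 HTTP/ranger.example.com@EXAMPLE.COM
"""
keytab_kvnos(sample, "HTTP/ranger.example.com")  # -> {2}
```

If `kvno HTTP/<RANGER_FQDN>` reports, say, 3 while the keytab holds only 2, the keytab is stale and must be regenerated.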
05-24-2019
08:49 PM
Hello @Rohan N, This is a known issue with HDP 2.5.3. Please try with HDP 3.1 and let us know if you still see this issue. If the issue persists, you might want to open a Hortonworks Support case and/or a Knox bug. Thanks.
05-24-2019
06:21 PM
Hi @Artur Brandys, This error is due to a missing trust path to the other party's certificate. Make sure that you have also imported the Knox cert into the Ambari truststore. Hope this helps!
05-22-2019
05:37 PM
This question is too generic to answer. Please add some more info, such as which service you are using, what your use case is, what Kerberos login you are setting up, and where/why you need this 'login context'. Thanks.
05-22-2019
05:35 PM
Hello @Ranjandas Athiyanathum Poyil, The Ranger KMS Master Key is used to encrypt the EZK (Encryption Zone Key). It can be stored either in the Ranger DB or in an HSM (Hardware Security Module). This diagram (although it is in the context of an HSM) will help you understand the flow of information. Hope this helps!
09-17-2018
06:41 PM
Hello @Kant T, > my question here is how to sync AD users with HDP clusters. The best way to achieve this is to use SSSD. With this approach, you make your cluster nodes part of the AD domain, and the nodes will then be able to see the AD users and groups. Please follow the instructions here: https://github.com/HortonworksUniversity/Security_Labs/blob/master/HDP-2.6-AD.md#setup-ados-integration-via-sssd I'd highly recommend going over this document from the beginning. Hope this helps.
08-20-2018
08:03 PM
+1 for the detailed answer!
08-01-2018
08:41 PM
@Bhushan Kandalkar, Can you log in to Zeppelin as 'bhushan-kandalkar' instead of 'bhushan-kandalkar@test.com'? You may need to set "activeDirectoryRealm.principalSuffix = @test.com" if you are using "org.apache.zeppelin.realm.ActiveDirectoryGroupRealm". With this set, you should be able to log in as 'bhushan-kandalkar', and the same name would appear in the notebook permissions. Hope this helps.
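For reference, the relevant shiro.ini section might look like this. The realm class and principalSuffix are from the post; the LDAP URL and searchBase below are placeholders you'd replace with your own values:

```
[main]
activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
activeDirectoryRealm.principalSuffix = @test.com
activeDirectoryRealm.url = ldap://ad.example.com:389
activeDirectoryRealm.searchBase = DC=test,DC=com
```

With principalSuffix set, users type only the short name at login and the realm appends the suffix before binding to AD.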
07-31-2018
08:25 PM
@Bhushan Kandalkar, Please let us know the value of the "zeppelin.notebook.storage" property in Zeppelin. If you can attach your zeppelin-site.xml from the Zeppelin node (after scrubbing your environment-specific details), that would be even better. What Felix is suggesting here may actually work if done correctly.
06-28-2018
06:18 PM
Hello @Sami Ahmad, CAUTION: Retrieving the keytab resets the secret for the Kerberos principal. This renders all other keytabs for that principal invalid. The correct answer depends on which Kerberos implementation you are using. For MIT KDC, a system admin would use the "kadmin" interface (or the alternative "kadmin.local") to create keytabs for users with the 'ktadd' command. ktadd regenerates the key with a random password and then adds it to the keytab:

# ktadd -k </path/to/file.keytab> <principal-name>

For FreeIPA, an admin would use the ipa-getkeytab command. This command can generate a keytab with a random or provided password:

# ipa-getkeytab -s <ipaserver.example.com> -p <principal-name> -k </path/to/file.keytab>

For Microsoft AD, an admin should use the ktpass command. This command is really useful when you want to generate a keytab for an AD service principal to be used in a Linux environment. It can also use a given password or a random password (+rndpass):

# ktpass /princ hive/sandbox.hortonworks.com@HWX.COM /pass <password> /mapuser hiveservice /ptype KRB5_NT_PRINCIPAL /crypto ALL /out c:\temp\hive.service.keytab

Hope this helps!
06-15-2018
09:32 PM
Hello @Tom Morris, There could be some issue with the Ambari server while requesting Kerberos information. To confirm, please run the following command in a terminal on the Ambari server:

# curl -H "Content-Type: application/text" -H 'X-Requested-By: ambari' -u admin:<password> -X GET "http://<ambari-server-host-fqdn>:8080/api/v1/clusters/<cluster-name>/kerberos_identities?fields=*&format=csv"

This will tell us whether the REST interface used to download the CSV is working. Please share the results with us. Hope this helps!
06-05-2018
06:21 PM
@Karthik Palanisamy, This is really a good piece of information. Thanks for sharing. Keep them coming!
05-24-2018
11:51 PM
Hello @Xubin Chen, I see that you are on Metron 0.4.1; can you please try installing Metron 0.4.2 (available with HCP 1.4.2)? Also, which Hortonworks CyberSecurity Platform version are you currently using? Thanks.
05-23-2018
05:47 PM
Hello @Xubin Chen, To reinstall the Metron REST RPMs, you can use the "yum reinstall <pkg-name>" command. To find the right package name for Metron REST, run "rpm -qa | grep metron" and look for the REST package in the output. Hope this helps!
05-22-2018
08:32 PM
Hello @Satya Nittala, The Atlas Type Model is a way to define metadata about the kind of data you want to manage. The Atlas Type System encompasses 'Types' and 'Entities'. It is very much possible to create a custom type for your dataset. Please refer to this article: https://community.hortonworks.com/articles/91237/creating-custom-types-and-entities-in-atlas.html Hope this helps!
05-22-2018
08:25 PM
1 Kudo
Hello @Praveen Pusarla, Looks like you are using a wrong class name, "ShellGroupssMapping", for the "hadoop.security.group.mapping" property in your core-site.xml file. By default, it is set to "org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback". Other possible values are:

org.apache.hadoop.security.ShellBasedUnixGroupsMapping
org.apache.hadoop.security.LdapGroupsMapping

If you fix this property in core-site.xml, you will not see the above error. Full Apache Hadoop documentation (for Hadoop v2.8.0, though) can be found here. Hope this helps!
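For reference, a minimal core-site.xml entry using the shell-based mapping would look like this (property name and class names are from the post; the rest of your core-site.xml stays as-is):

```xml
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
</property>
```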
05-22-2018
08:12 PM
Hello @Xubin Chen, This could be due to a bad Solr jar used in the Metron REST service. (It is not clear from the question which Metron version you are using.) This can be fixed either by reinstalling the Metron REST bits with the latest ones from the repo, or by replacing the Solr jar with a good copy from the latest Metron repo. (I'd prefer the first option.) Hope this helps!
05-22-2018
07:57 PM
Hello @sudi ts, In order to support Ranger authorization, the host service (like NameNode, HiveServer2, or in your case GCS) needs to implement the Ranger APIs (in the form of a Ranger plugin). I don't see support for Ranger in the Google Cloud Storage documentation, so the answer is no. Similarly for Cloud SQL, I could not find Ranger support in the Cloud SQL documentation. They do talk about project-level access control and instance/database-level access control, but not via Ranger. Hope this helps!
05-22-2018
07:41 PM
Hello @Sriram, Zeppelin cannot (and should not) disable user login upon multiple unsuccessful attempts. That is the duty of the underlying authentication service (AD or LDAP). Organizations usually define this in login policies (like password policy, account lockout policy, etc.) at the authentication service. Zeppelin, like any other service, just reacts to these policies. Hope this helps!
05-15-2018
10:21 PM
@karim farhane, ZEPPELIN-2796 is included in HDP version 2.6.3 onwards. FYI.
05-09-2018
04:25 PM
Hello @Bhushan Kandalkar, At this point, I'd enable debug logging for Beeline and check where exactly it is failing. Also, I'm surprised to see that neither HS2 shows any sign of an error whereas Beeline shows a '500 internal server error'. I hope you have checked both HS2 logs. Anyway, the Beeline debug output should tell us more. Hope this helps!

UPDATE: I looked at it again, and that '500 internal server error' is actually from Knox, due to this line:

2018-05-08 08:32:12,767 ERROR hadoop.gateway (AbstractGatewayFilter.java:doFilter(63)) - Failed to execute filter: java.io.IOException: Service connectivity error.

This tells me that Knox is not able to connect to your authentication server (defined in the topology). So instead of enabling debug in Beeline, I'd enable debug in Knox to learn more. Also, are you able to make an HDFS call via Knox using the same topology (just to verify the topology configuration)?
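If it helps, Knox gateway logging can usually be raised to DEBUG through its log4j properties. The file location and logger package below are assumptions for an HDP-era install and may differ by version:

```
# Assumed file: /etc/knox/conf/gateway-log4j.properties
log4j.logger.org.apache.hadoop.gateway=DEBUG
```

Restart the Knox gateway after the change and retry the Beeline connection to capture the connectivity failure in detail.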
05-04-2018
06:46 PM
Hello @Abhilash Chandra, Currently there is no option to store Ranger audit logs directly in S3 instead of HDFS. For now, you'll need to do it in two steps: first let the audit logs get collected in HDFS, then move them to S3 via a periodic job or an S3 connector of your choice. Hope this helps.
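As a sketch of the periodic-job approach, a cron entry like the one below could do the copy. The HDFS path, bucket name, and schedule are hypothetical, and distcp with an s3a:// target requires the S3A connector and credentials to be configured on the cluster:

```
# Hypothetical crontab entry: hourly sync of Ranger audit logs from HDFS to S3
0 * * * * hadoop distcp -update /ranger/audit s3a://my-audit-bucket/ranger/audit
```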