Member since: 09-24-2015
Posts: 49
Kudos Received: 67
Solutions: 16
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 5347 | 03-29-2016 03:02 PM |
|  | 3034 | 03-21-2016 01:34 PM |
|  | 3505 | 03-07-2016 09:12 PM |
|  | 2967 | 01-12-2016 10:01 PM |
|  | 1078 | 01-11-2016 10:04 PM |
01-18-2016
09:38 PM
1 Kudo
Have you read the "Configuring SSL Verification" section (pg 19) of the Hortonworks Hive ODBC Driver with SQL Connector User Guide? At a minimum you will need "Enable SSL" and "Allow Self-signed Server Certificate". You may also require "Allow Common Name Host Name Mismatch" if you have DNS setup issues at your site. I'm not totally sure whether you need a PEM file when using self-signed certs; try the above first. http://hortonworks.com/wp-content/uploads/2015/10/Hortonworks-Hive-ODBC-Driver-User-Guide.pdf
01-14-2016
02:39 PM
3 Kudos
I was able to set this up using:

Authentication Method: Simple Authentication
Bind DN or user: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org

This of course assumes you haven't changed the users.ldif file. I'm guessing you are trying to use your real domain but haven't updated users.ldif to reflect that. For reference, here is the entry for the admin user in the default demo users.ldif file:

dn: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
objectclass:top
objectclass:person
objectclass:organizationalPerson
objectclass:inetOrgPerson
cn: Admin
sn: Admin
uid: admin
userPassword:xxxxxxxxxxxxxx
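As a quick sanity check you can verify that bind DN against the demo LDAP directly. A sketch using ldapsearch, assuming the Knox demo LDAP is running on its default port 33389 and substituting the demo admin password:
{code}
# Bind as the demo admin and look up its own entry
ldapsearch -H ldap://localhost:33389 -x \
  -D "uid=admin,ou=people,dc=hadoop,dc=apache,dc=org" \
  -w '<admin-password>' \
  -b "dc=hadoop,dc=apache,dc=org" "(uid=admin)"
{code}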
01-12-2016
10:03 PM
Any chance you can capture and provide the gateway.log content from the time period when this occurred?
01-12-2016
10:01 PM
5 Kudos
Currently Knox does not support proxying the Ranger UI. If/when Knox does support proxying the Ranger UI, you are correct that it may be impossible to access the Ranger UI via Knox if the Ranger/Knox agent is installed and the required users have not already been granted access. Presumably setting up the required policies would be done beforehand, or from "within" the cluster and not via Knox.
01-11-2016
10:04 PM
2 Kudos
The answer is going to depend on exactly what you are looking for. Knox has a general purpose WebAppSecurityProvider that currently supports layering Cross Site Request Forgery (CSRF) protection onto any of the REST APIs Knox supports. The WebAppSecurityProvider is also extensible, so that support for other common web app vulnerabilities could be developed and plugged in. Knox does not currently have any support for layering injection vulnerability protection onto the supported REST APIs. This is possible for some services given the architecture, but would require a much tighter coupling between Knox and those services than would be ideal. Can you please clarify what you mean by "broken authentication" before I tackle that one?
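For reference, turning on the CSRF protection is done in the topology file via the webappsec provider. A minimal sketch based on the Knox user guide (parameter names may vary by release):
{code}
<provider>
    <role>webappsec</role>
    <name>WebAppSec</name>
    <enabled>true</enabled>
    <!-- Reject requests that lack the expected custom header (X-XSRF-Header by default) -->
    <param>
        <name>csrf.enabled</name>
        <value>true</value>
    </param>
</provider>
{code}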
01-04-2016
06:16 PM
1 Kudo
Thinking about running it "within Hadoop" may be the wrong way to think about it. From a Knox perspective, "within Hadoop" is usually discussed from a security perspective: everything "within Hadoop" is protected by the same firewall setup, same Kerberos configuration, etc. It is really just a collection of hosts dedicated to the various components that make up the solution. So in your case the Tomcat-hosted API could simply run on one of the hosts that is part of the infrastructure dedicated to Hadoop. This API would be accessed via Knox, which would be running on one of the hosts considered to be at the "perimeter" from a network security perspective.

All of the above being said, it is actually possible to run your Jetty or Tomcat hosted API "on Hadoop" via Slider. In this case the life-cycle of your API server would be managed by Hadoop. This would present some challenges from a Knox perspective, as the Hadoop resource manager YARN may run your API server on any compute node in the cluster, making the Knox configuration challenging.
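To make the first option concrete: exposing such an API through Knox amounts to adding a service entry to the topology file. A minimal sketch, assuming you have created a custom service definition named MYAPI (both the role and the Tomcat host below are hypothetical):
{code}
<service>
    <role>MYAPI</role>
    <url>http://tomcat-host.internal:8080/myapi</url>
</service>
{code}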
12-23-2015
03:03 PM
1 Kudo
A primary benefit of using Knox is that it insulates the clients from needing to be aware of Kerberos. However, if the HDP cluster is configured with Kerberos, then Knox will need to be configured to interact with the cluster securely via Kerberos. The clients, however, will be unaffected.
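For reference, the Knox side of this is gateway configuration rather than anything client-facing. A sketch of the relevant gateway-site.xml properties from the Knox documentation on secure clusters (the file paths are examples; verify the property names for your Knox version):
{code}
<property>
    <name>gateway.hadoop.kerberos.secured</name>
    <value>true</value>
</property>
<property>
    <name>java.security.krb5.conf</name>
    <value>/etc/knox/conf/krb5.conf</value>
</property>
<property>
    <name>java.security.auth.login.config</name>
    <value>/etc/knox/conf/krb5JAASLogin.conf</value>
</property>
{code}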
12-23-2015
02:59 PM
1 Kudo
Unfortunately HDP 2.2 was not certified with JDK 1.8, and Knox 0.5.0.2.2 in particular has an issue with a keystore API change in JDK 1.8 that prevents it from starting. The only solution is to either upgrade HDP or downgrade the JDK.
12-22-2015
02:54 PM
2 Kudos
By default Knox has special behavior for Hadoop services that use the Hadoop Auth module (https://hadoop.apache.org/docs/stable/hadoop-auth/...). So yes, it adds the user.name query parameter by default. I'm curious as to why {$username} isn't working for you though. What version of Knox are you using?
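You can see this behavior from the client side: the client never supplies user.name itself, Knox derives it from the authenticated session. A sketch using the demo LDAP guest credentials (host, port, and credentials are placeholders for your setup):
{code}
# The client authenticates to Knox with HTTP Basic; Knox then adds
# user.name=guest to the outbound WebHDFS request on its own.
curl -iku guest:guest-password \
  'https://knox-host:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS'
{code}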
12-21-2015
04:02 PM
3 Kudos
In your rewrite.xml you can use a rewrite function to retrieve the current effective username. You can see an example of this in WebHDFS.
{code}
<rule dir="IN" name="WEBHDFS/webhdfs/inbound/namenode/home/file" pattern="*://*:*/**/webhdfs/{version}/~/{path=**}?{**}">
<rewrite template="{$serviceUrl[WEBHDFS]}/{version}/user/{$username}/{path=**}?{**}"/>
</rule>
{code}

However, the password is a different matter; there are several issues with it. Depending upon the authentication provider there may be no password at all, and the general Knox model is to protect the password, not to make it easy to access. So from this perspective perhaps we need to understand your use case a bit better to determine if there is a different way to accomplish your goals. Without more information, I'm guessing you actually need a trusted proxy model, where your target service trusts that Knox has pre-authenticated the user and therefore only the username is required.
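In that trusted proxy model, Knox's identity assertion provider is what propagates the authenticated principal to the backend. A minimal sketch of the default provider in a topology file (provider name per the Knox user guide; adjust to your release):
{code}
<provider>
    <role>identity-assertion</role>
    <name>Default</name>
    <enabled>true</enabled>
</provider>
{code}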