Member since: 09-24-2015
Posts: 49
Kudos Received: 67
Solutions: 16

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5349 | 03-29-2016 03:02 PM |
| | 3038 | 03-21-2016 01:34 PM |
| | 3506 | 03-07-2016 09:12 PM |
| | 2973 | 01-12-2016 10:01 PM |
| | 1081 | 01-11-2016 10:04 PM |
04-11-2016 03:23 PM (2 Kudos)
It is technically possible to have more than one authentication provider in a given topology, but the result is unlikely to be what is expected. One reason to do so is that all but one of them are set to enabled=false, so that in effect only one is active. The other possibility is that a given custom service in the topology requires a specific authentication provider implementation. In that case the first enabled authentication provider in the topology would be the default, and the custom service would identify a specific authentication provider by role and name in its service.xml file.
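As a rough sketch (the provider names below are illustrative; CustomAuthProvider is a hypothetical custom implementation), a topology with two authentication providers where only one is actually in effect might look like this:

<topology>
  <gateway>
    <!-- Present in the topology but ignored because it is disabled. -->
    <provider>
      <role>authentication</role>
      <name>ShiroProvider</name>
      <enabled>false</enabled>
    </provider>
    <!-- The first enabled authentication provider becomes the topology default. -->
    <provider>
      <role>authentication</role>
      <name>CustomAuthProvider</name>
      <enabled>true</enabled>
    </provider>
  </gateway>
</topology>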
04-05-2016 02:27 PM
and /var/log/knox/gateway.out if it isn't empty while you are collecting things.
03-29-2016 03:02 PM (2 Kudos)
The Apache Knox User's Guide contains a section (http://knox.apache.org/books/knox-0-6-0/user-guide.html#High+Availability) on using the Apache HTTP Server as a load balancer for Knox. Someone familiar with another load balancer should be able to extrapolate the information provided in the Apache Knox User's Guide to set up that product.
03-22-2016 04:51 PM
@Sushil Kumar If the answers below don't address your questions, can you be more specific?
For the "This command bin/gateway.sh must not be run as root." issue you should either not be logged in as root or you should execute the command as a different user via sudo (e.g. su -l knox -c "bin/gateway.sh start")
For the "integration with open Apache Hadoop" you just need to modify the topology files (e.g. conf/topologies/sandbox.xml) to match the network locations for your installation.
03-21-2016 01:34 PM (1 Kudo)
The LDAP configuration goes in the topology files, which are located in <GATEWAY_HOME>/conf/topologies. Out of the box there is a topology file named sandbox.xml with some sample configuration; that is what would need to be modified to integrate with a different LDAP server or Active Directory. The ldap.jar that you show running is a demo LDAP server provided with Knox and is not intended for production environments. If you haven't seen the Knox User's Guide, check it out for more information about configuring things: http://knox.apache.org/books/knox-0-8-0/user-guide.html
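As a rough sketch (the DN template, URL, and realm parameters below are illustrative placeholders patterned on the ShiroProvider examples in the User's Guide), integrating with your own directory mostly comes down to the authentication provider parameters in the topology file:

<provider>
  <role>authentication</role>
  <name>ShiroProvider</name>
  <enabled>true</enabled>
  <param>
    <name>main.ldapRealm</name>
    <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
  </param>
  <param>
    <name>main.ldapRealm.userDnTemplate</name>
    <value>uid={0},ou=people,dc=example,dc=com</value>
  </param>
  <param>
    <name>main.ldapRealm.contextFactory.url</name>
    <value>ldap://your-ldap-host:389</value>
  </param>
  <param>
    <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
    <value>simple</value>
  </param>
  <param>
    <name>urls./**</name>
    <value>authcBasic</value>
  </param>
</provider>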
03-07-2016 09:12 PM
Can you confirm you have a topology file named knox_sample.xml in your {GATEWAY_HOME}/conf/topologies directory? You should also check to make sure that it is valid XML. If it is malformed, the topology will not deploy and you will get 404s.
02-29-2016 07:26 PM (2 Kudos)
Sorry it took me a while to respond here, but I was putting together a working sample. The first important point is that I think people tend to overestimate the complexity of dealing with the REST APIs, especially WebHDFS. The point of REST APIs, after all, is to enable very thin clients. I played with a few different Java HTTP client libraries and, to my surprise, the venerable Java HttpsURLConnection resulted in the cleanest examples. The Apache HttpClient is certainly an option and might be warranted in more complex situations.
IMPORTANT: Before you continue, please note that these examples are set up to circumvent both SSL hostname and certificate validation. This is not acceptable in production, but it often helps in samples to keep SSL configuration from becoming a barrier to success.
I'll show the heart of the solution below but the full answer can be found here: https://github.com/kminder/knox-webhdfs-client-examples. Specifically here: https://github.com/kminder/knox-webhdfs-client-examples/blob/master/src/test/java/net/minder/KnoxWebHdfsJavaClientExamplesTest.java
Now for the code. The first is an example of the simplest of operations: GETHOMEDIRECTORY.
@Test
public void getHomeDirExample() throws Exception {
  HttpsURLConnection connection;
  InputStream input;
  JsonNode json;
  // GETHOMEDIRECTORY is a simple GET that returns a small JSON document.
  connection = createHttpUrlConnection( WEBHDFS_URL + "?op=GETHOMEDIRECTORY" );
  input = connection.getInputStream();
  json = MAPPER.readTree( input );
  input.close();
  connection.disconnect();
  assertThat( json.get( "Path" ).asText(), is( "/user/" + TEST_USERNAME ) );
}
Next is a more complicated sample that writes a file to HDFS and reads it back via the CREATE and OPEN operations.
@Test
public void putGetFileExample() throws Exception {
  HttpsURLConnection connection;
  String redirect;
  InputStream input;
  OutputStream output;
  String data = UUID.randomUUID().toString();

  // Request file creation; WebHDFS replies with a 307 redirect to the write location.
  connection = createHttpUrlConnection( WEBHDFS_URL + "/tmp/" + data + "/?op=CREATE" );
  connection.setRequestMethod( "PUT" );
  assertThat( connection.getResponseCode(), is(307) );
  redirect = connection.getHeaderField( "Location" );
  connection.disconnect();

  // PUT the file content to the redirect location.
  connection = createHttpUrlConnection( redirect );
  connection.setRequestMethod( "PUT" );
  connection.setDoOutput( true );
  output = connection.getOutputStream();
  IOUtils.write( data.getBytes(), output );
  output.close();
  assertThat( connection.getResponseCode(), is(201) );
  connection.disconnect();

  // Request the file; again a 307 redirect to the read location is returned.
  connection = createHttpUrlConnection( WEBHDFS_URL + "/tmp/" + data + "/?op=OPEN" );
  assertThat( connection.getResponseCode(), is(307) );
  redirect = connection.getHeaderField( "Location" );
  connection.disconnect();

  // Read the content from the redirect location and verify it round-tripped.
  connection = createHttpUrlConnection( redirect );
  input = connection.getInputStream();
  assertThat( IOUtils.toString( input ), is( data ) );
  input.close();
  connection.disconnect();
}
Now of course you have probably noticed that all of the "magic" is hidden in that createHttpUrlConnection method. It isn't really magic at all, but this is where the "un-securing" of SSL happens. It also takes care of setting up HTTP Basic authentication and disables automatic redirect following, which should be done when using the WebHDFS REST APIs.

private HttpsURLConnection createHttpUrlConnection( URL url ) throws Exception {
  HttpsURLConnection conn = (HttpsURLConnection)url.openConnection();
  // WARNING: trust-all hostname and certificate checks are for samples only, never for production.
  conn.setHostnameVerifier( new TrustAllHosts() );
  conn.setSSLSocketFactory( TrustAllCerts.createInsecureSslContext().getSocketFactory() );
  // Redirects are handled explicitly by the callers so the Location header can be inspected.
  conn.setInstanceFollowRedirects( false );
  // HTTP Basic authentication credentials for the Knox gateway.
  String credentials = TEST_USERNAME + ":" + TEST_PASSWORD;
  conn.setRequestProperty( "Authorization", "Basic " + DatatypeConverter.printBase64Binary( credentials.getBytes() ) );
  return conn;
}

private HttpsURLConnection createHttpUrlConnection( String url ) throws Exception {
  return createHttpUrlConnection( new URL( url ) );
}
02-16-2016 02:49 PM (1 Kudo)
I didn't realize that beeline was already working via Knox. A few questions, then: What application is making the HS2 call via Knox? Is the application using JDBC or ODBC drivers, and which version? What does your JDBC connection string look like (without real hostnames or passwords, of course)?
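For comparison, a Hive JDBC connection string that goes through Knox usually looks something like the following; the host, truststore path and password, and the 'default' topology name are placeholders for your environment:

jdbc:hive2://knox-host:8443/;ssl=true;sslTrustStore=/path/to/gateway-truststore.jks;trustStorePassword=changeit;transportMode=http;httpPath=gateway/default/hive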
02-12-2016 02:23 PM (2 Kudos)
Looks like you are missing:
hadoop.proxyuser.knox.groups=users
hadoop.proxyuser.knox.hosts=*
Also note that you should probably not have:
hadoop.proxyuser.guest.groups=users
hadoop.proxyuser.guest.hosts=*
as this is essentially saying that the 'guest' user is allowed to impersonate anyone in the 'users' group. Beyond that you need to ensure that your user 'adpqa' is in group 'users'.
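For reference, in core-site.xml the missing proxyuser entries above would look roughly like this (a sketch assuming the Knox gateway runs as the 'knox' user):

<property>
  <name>hadoop.proxyuser.knox.groups</name>
  <value>users</value>
</property>
<property>
  <name>hadoop.proxyuser.knox.hosts</name>
  <value>*</value>
</property>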
01-26-2016 08:41 PM (2 Kudos)
The potentially confusing process of adding a partition to Apache Directory Studio (ADS) is the reason we decided to include the pre-populated Demo LDAP server with Knox instead of just instructions for using ADS.
To do this in ADS, switch to the "Servers" tab in the lower right and click on Local. Then, in the Partitions view on the left, press "Add..." and provide the Suffix: value, for example dc=custom,dc=sample,dc=com. Set ID: to something unique. You should then be able to add subentries to that partition, and you would no longer use the Knox Demo LDAP server. Keep in mind that the port is typically 10389 instead of the 33389 used by the Knox Demo LDAP; see the "General" view tab when the Local server is selected for details.
You can import an LDIF file using the File > Import menu item. Select LDAP Browser > LDIF into LDAP, browse for your LDIF file, and import into Local. Make sure you check "Overwrite existing log..." if you have to repeat the process. One confusing part here is that there needs to be an entry in your LDIF file for the Suffix: entered above. For example, if you are trying to import the users.ldif that comes with Knox, the Suffix: you would use is dc=hadoop,dc=apache,dc=org because that is the root object in users.ldif.
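For illustration, a minimal root entry for the hypothetical dc=custom,dc=sample,dc=com suffix mentioned above might look like this in LDIF (a sketch, not an excerpt from the users.ldif shipped with Knox):

dn: dc=custom,dc=sample,dc=com
objectClass: top
objectClass: domain
dc: custom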