Member since
09-05-2016
Posts: 33
Kudos Received: 8
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1055 | 10-15-2019 09:05 PM
 | 1849 | 08-30-2018 11:56 PM
 | 5518 | 10-05-2016 07:07 AM
 | 1378 | 09-29-2016 01:28 AM
10-15-2019
09:05 PM
The problem was iptables on the data nodes. Once the rules were flushed, the following command worked a treat:
hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true hdfs://active_namenode:8020/tmp/test.txt swebhdfs://active_namenode:50470/tmp/test.txt
Apologies for my confusion.
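For anyone hitting the same symptom, a rough sketch of how the firewall state can be checked and cleared on a data node. The remote subnet, the DataNode transfer port (50010 is the HDP 2.x default) and the choice to flush rather than add a targeted rule are illustrative assumptions:
# on each data node, as root: list the current rules with packet counters
iptables -L -n -v
# temporarily flush all rules to confirm the firewall is what is blocking the transfer
iptables -F
# if that fixes it, prefer a targeted rule over a permanent flush, e.g.
iptables -I INPUT -p tcp -s <remote_cluster_subnet>/24 --dport 50010 -j ACCEPT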
10-14-2019
04:25 PM
Hi,
I have an issue with distcp authentication between a Kerberos-secured cluster (HDP 2.6.1.0-129) and an unsecured cluster (HDP 3.0.1.0-187).
The compatibility matrix for the versions says they should interoperate.
I am running the following from the secure cluster:
hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true swebhdfs://FQDN(secure cluster):50470/tmp/test.sh swebhdfs://FQDN(insecure cluster):50470/tmp/test.sh
Both ends have TLS enabled. I have a truststore configured and working.
I can see packets outbound from the client and arriving at the server using tcpdump.
The error returned by the distcp command above is:
19/10/15 09:58:48 ERROR tools.DistCp: Exception encountered
org.apache.hadoop.security.AccessControlException: Authentication required
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:460)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:114)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:750)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:592)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:622)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:618)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1524)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:333)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:557)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:578)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractFsPathRunner.getUrl(WebHdfsFileSystem.java:852)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:745)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:592)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:622)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:618)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1004)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1020)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1696)
at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:398)
at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:190)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:155)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
The namenode log on the client shows successful authentication to kerberos.
The server hdfs namenode log shows the following warning:
WARN namenode.FSNamesystem (FSNamesystem.java:getDelegationToken(5611)) - trying to get DT with no secret manager running
Has anyone come across this issue before?
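One way to narrow this kind of failure down is to hit the WebHDFS REST endpoints on both namenodes directly, taking distcp out of the picture. A sketch only; the hostnames are placeholders and the default swebhdfs port 50470 is assumed on both sides:
# secure side: needs a valid ticket and SPNEGO negotiation
kinit
curl -k --negotiate -u : "https://<secure_nn_fqdn>:50470/webhdfs/v1/tmp?op=LISTSTATUS"
# insecure side: simple auth, identify the caller with user.name
curl -k "https://<insecure_nn_fqdn>:50470/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs"
If both answer, TLS and basic reachability are fine, and the problem is likely in how distcp asks the insecure side for a delegation token, which would match the "no secret manager running" warning above.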
Labels:
- Hortonworks Data Platform (HDP)
10-02-2019
06:36 PM
1 Kudo
Hi @hbased, it is a bug. There is a Jira for the issue and apparently it was resolved in version 3.0.0 (that is the Apache HBase version, not necessarily your distro's HBase version). Cloudera say the issue is resolved in 3.1.4. We have raised a support ticket with Cloudera and they are patching our current distro version of HBase and providing us with a new binary. Here's the Jira ref: https://issues.apache.org/jira/browse/HBASE-21960 All the best
09-05-2019
11:06 PM
Hi @hbased I restarted my hbase rest servers - no joy. I have the hadoop.proxyuser.HTTP.<> settings in core site. I think I'll have a trawl through the hbase code and raise a jira. Good luck with the cell based auth. Thanks for your help! Cheers Andrew
09-05-2019
01:07 AM
Hi @hbased Did you get it working? Yeah, I was running the rest server as root with no Kerberos enabled. Curl requests through Knox were all run as root. Now with Kerberos enabled and running the rest server as hbase rather than root every request is run as HTTP. Which is my proxyuser. I have been restarting the rest server but will try again tomorrow as a sanity check. Cheers
09-04-2019
10:56 PM
I have enabled Kerberos and cannot get the proxying to function as expected. @hbased has a very similar issue here. Ranger policies are working as expected - denying access to the proxy user (HTTP) - but the user impersonation simply doesn't work.
09-04-2019
10:40 PM
I'm having a very similar issue. I posted this on 08-08-2019: https://community.cloudera.com/t5/Support-Questions/Ranger-hbase-plugin-not-proxying-users-Runs-every-request-as/m-p/237324
HDP 3.0.1, hbase 2.0.0.3.0.1.0-187
Knox is dispatching the request to hbase correctly:
19/09/05 14:39:31 ||c5231017-9998-458d-9336-98721bcb7cb2|audit|39.7.48.21|WEBHBASE|c8ary|||dispatch|uri|https://<namenode>:60080/footy/1?doAs=andrew|unavailable|Request method: GET
19/09/05 14:39:31 ||c5231017-9998-458d-9336-98721bcb7cb2|audit|39.7.48.21|WEBHBASE|c8ary|||dispatch|uri|https://<namenode>:60080/footy/1?doAs=andrew|success|Response status: 200
19/09/05 14:39:31 |||audit|39.7.48.21|WEBHBASE|andrew|||access|uri|/gateway/default/hbase/footy/1|success|Response status: 200
Every request is run as HTTP (which I have specified as the rest principal and rest authentication principal):
hbase.rest.authentication.kerberos.keytab = /etc/security/keytabs/spnego.service.keytab
hbase.rest.authentication.kerberos.principal = HTTP/_HOST@REALM
hbase.rest.kerberos.principal = HTTP/_HOST@REALM
hbase.rest.keytab.file = /etc/security/keytabs/spnego.service.keytab
The hbase-ranger-plugin is working: when I remove the HTTP user from the Ranger policy, access is denied. doAs functionality seems buggy to me also.
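To separate Knox from the REST server's own doAs handling, it may help to call the REST server directly with the same proxying principal Knox uses and then see which user turns up in the Ranger audit. A sketch only; the keytab path and hostnames are the ones from this post, and the default realm is assumed to come from krb5.conf:
# authenticate as the proxying principal (the same one Knox uses) ...
kinit -kt /etc/security/keytabs/spnego.service.keytab HTTP/$(hostname -f)
# ... and ask the REST server to run the request as another user
curl -ik --negotiate -u : "https://<namenode>:60080/footy/1?doAs=andrew"
# if the Ranger audit still shows HTTP rather than andrew, the REST server itself
# is ignoring doAs and Knox can be ruled out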
08-14-2019
01:27 AM
Thanks Josh, I'll enable kerberos and retry. I'll let you know how I go.
08-08-2019
12:28 AM
I am upgrading our cluster to HDP 3.0.1. I have installed ranger (1.1.0) and enabled the hbase plugin. (Other plugins are working as expected - hdfs, knox etc) All requests to hbase with _any_ user are being run as the root user. This is true through either knox or a direct call to hbase using curl. I do not have kerberos enabled as yet. Has anyone seen this before?
Labels:
- Apache HBase
- Apache Ranger
11-29-2018
10:30 PM
@Matthew Shipton Hi Matthew, I have a similar requirement. ManifoldCF will do what you need as far as I know; it didn't seem to be a very active project however. Ranger (0.7.0) has a solr plug-in which provides access control at the collection level, and I believe it also requires Kerberos to be enabled to work. For us this requirement means we need to implement a cross-realm trust with our MS domains to get to the solr UI, and it starts getting complex, requiring the involvement of multiple external teams. The approach I am currently pursuing is a proxy to intercept the rest calls and insert a filter query param. The solr documents need to have an attribute with the values and you can filter on those. I'm not sure if this is what is meant by "hooks and bridges". I am looking at using Lucidworks Fusion at the moment. I'm not entirely certain but I think it is a commercialised version of ManifoldCF. If this doesn't work out I'll probably end up implementing a wrapper for Solr that does what I need. If you come across any better solution please share. Hope this helps Cheers
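To illustrate the filter-query approach described above: the proxy rewrites the incoming request to add an fq parameter based on an access attribute stored on each document. A sketch only; the host, collection, the acl_groups field and the group values are all hypothetical:
# what the client sends
curl -G "http://<solr_host>:8983/solr/<collection>/select" --data-urlencode "q=title:report"
# what the proxy forwards: the same query, restricted to documents the caller's groups may see
curl -G "http://<solr_host>:8983/solr/<collection>/select" \
  --data-urlencode "q=title:report" \
  --data-urlencode "fq=acl_groups:(finance OR all_staff)"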
10-29-2018
12:17 AM
@Mike Wong I had a similar issue when I enabled SSL on HDFS, MapReduce and YARN. Symptoms:
- I could connect to the HDFS REST interface from the command line internal to the cluster using curl
- I could run the HDFS UI from inside the cluster using X-windows-enabled PuTTY and XMing
- OpenSSL from inside the cluster returned 0 for the HDFS connection
- No external connection to HDFS could be made using curl/openssl/telnet
- The ResourceManager and JobHistory UIs both worked fine
- We had a firewall, but even with it disabled the connection to HDFS failed
The issue was that we have internal network IP addresses (InfiniBand) and externally accessible IP addresses. When HTTPS was enabled, HDFS bound to the internal address and wasn't listening on the external address - hence the connection refused. The solution was to add the dfs.namenode.https-bind-host=0.0.0.0 property to get the service to listen across all network interfaces. It might be worth checking, for anyone else who gets a connection refused error and has multiple network interfaces, which address the port is bound to:
netstat -nlp | grep 50470
tcp 0 0 <internal IP address>:50470 0.0.0.0:* LISTEN 23521/java
netstat -nlp | grep 50070
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 23521/java
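For completeness, a sketch of applying and verifying the fix. The assumption is that the property is added as a custom hdfs-site entry (e.g. via Ambari) and the NameNode is restarted afterwards:
# add to custom hdfs-site (or hdfs-site.xml directly), then restart HDFS:
#   dfs.namenode.https-bind-host = 0.0.0.0
# afterwards the HTTPS listener should be bound to all interfaces, not just the internal one
netstat -nlp | grep 50470
# expected: tcp  0  0 0.0.0.0:50470  0.0.0.0:*  LISTEN  <pid>/java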
08-30-2018
11:56 PM
As far as I know there is no way around the issue except to get the example.com name added to the certificate as a subject alt name. TLS implementations have become more strictly enforced over the years, as anyone who has configured against NiFi will attest. Adding the subject alt name shouldn't be an issue, except that the certificates are self-signed instead of having an issuer chain. This means that all relying applications would need to update their trust stores with the new certificate(s). Your AD team should be using a chain rather than self-signed certificates, because self-signed certificates will also cause problems when they expire.
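For reference, a sketch of requesting a certificate that carries the extra name as a SAN using keytool. The alias, hostnames and the example.com name are placeholders, and the signed certificate would still need to be pushed out to the relying trust stores:
# key pair plus a CSR carrying both the host FQDN and the alias name as SANs
keytool -genkeypair -alias myhost -keyalg RSA -keysize 2048 \
  -dname "CN=myhost.internal.example.org" \
  -ext "SAN=dns:myhost.internal.example.org,dns:example.com" \
  -keystore myhost.jks
keytool -certreq -alias myhost -keystore myhost.jks \
  -ext "SAN=dns:myhost.internal.example.org,dns:example.com" -file myhost.csr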
01-08-2018
04:06 AM
Hi all, I am trying to configure the ranger-solr-plugin to work with knox authentication in the Sandbox 2.6.0. I'm using Solr 7.1, Knox 0.12.0 and Ranger 0.7. Kerberos is enabled. The ranger-solr-plugin works fine with a direct connection (kerberos authentication) to solr using a cURL request. When I submit a cURL request through Knox I get a 401 "Authentication required" error. The Solr logs show that the credentials passed through by Knox are the basic auth credentials (that were passed to Knox), when Solr is expecting kerberos authentication. Any advice appreciated. solr logs:
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.HttpChannel REQUEST for //sandbox.hortonworks.com:8443/solr/techproducts/query?q=* on HttpChannelOverHttp@32c7ec9f{r=2,c=false,a=IDLE,uri=//sandbox.hortonworks.com:8443/solr/techproducts/query?q=*}
GET //sandbox.hortonworks.com:8443/solr/techproducts/query?q=* HTTP/1.1
X-Forwarded-For: 172.17.0.2
X-Forwarded-Proto: https
X-Forwarded-Port: 8443
X-Forwarded-Host: sandbox.hortonworks.com:8443
X-Forwarded-Server: sandbox.hortonworks.com
X-Forwarded-Context: /gateway/default
Authorization: Basic dG9tOnRvbS1wYXNzd29yZA==
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Host: sandbox.hortonworks.com:8443
Accept: */*
Connection: keep-alive
Accept-Encoding: gzip,deflate
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.HttpChannel HttpChannelOverHttp@32c7ec9f{r=2,c=false,a=IDLE,uri=//sandbox.hortonworks.com:8443/solr/techproducts/query?q=*} onContentComplete
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.HttpChannel HttpChannelOverHttp@32c7ec9f{r=2,c=false,a=IDLE,uri=//sandbox.hortonworks.com:8443/solr/techproducts/query?q=*} onRequestComplete
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.HttpInput HttpInputOverHTTP@2e0a216d[c=0,q=1,[0]=EOF,s=STREAM] addContent EOF
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.HttpConnection HttpConnection@380a76ec[SelectChannelEndPoint@689b879d{/172.17.0.2:50568<->9041,Open,in,out,-,-,1/120000,HttpConnection@380a76ec}{io=1/0,kio=1,kro=1}][p=HttpParser{s=END,0 of -1},g=HttpGenerator@c99a91f{s=START},c=HttpChannelOverHttp@32c7ec9f{r=2,c=false,a=IDLE,uri=//sandbox.hortonworks.com:8443/solr/techproducts/query?q=*}] parsed true HttpParser{s=END,0 of -1}
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.HttpConnection releaseRequestBuffer HttpConnection@380a76ec[SelectChannelEndPoint@689b879d{/172.17.0.2:50568<->9041,Open,in,out,-,-,1/120000,HttpConnection@380a76ec}{io=1/0,kio=1,kro=1}][p=HttpParser{s=END,0 of -1},g=HttpGenerator@c99a91f{s=START},c=HttpChannelOverHttp@32c7ec9f{r=2,c=false,a=IDLE,uri=//sandbox.hortonworks.com:8443/solr/techproducts/query?q=*}]
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.HttpChannel HttpChannelOverHttp@32c7ec9f{r=2,c=false,a=IDLE,uri=//sandbox.hortonworks.com:8443/solr/techproducts/query?q=*} handle //sandbox.hortonworks.com:8443/solr/techproducts/query?q=*
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.HttpChannelState HttpChannelState@275bce0{s=IDLE a=NOT_ASYNC i=true r=NONE/false w=false} handling IDLE
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.HttpChannel HttpChannelOverHttp@32c7ec9f{r=2,c=false,a=DISPATCHED,uri=//sandbox.hortonworks.com:8443/solr/techproducts/query?q=*} action DISPATCH
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.Server REQUEST GET /solr/techproducts/query on HttpChannelOverHttp@32c7ec9f{r=2,c=false,a=DISPATCHED,uri=//sandbox.hortonworks.com:8443/solr/techproducts/query?q=*}
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.h.ContextHandler scope null||/solr/techproducts/query @ o.e.j.w.WebAppContext@7d70d1b1{/solr,file:///opt/solr-7.1.0/server/solr-webapp/webapp/,AVAILABLE}{/opt/solr-7.1.0/server/solr-webapp/webapp}
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.h.ContextHandler context=/solr||/techproducts/query @ o.e.j.w.WebAppContext@7d70d1b1{/solr,file:///opt/solr-7.1.0/server/solr-webapp/webapp/,AVAILABLE}{/opt/solr-7.1.0/server/solr-webapp/webapp}
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.session sessionManager=org.eclipse.jetty.server.session.HashSessionManager@2a556333
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.session session=null
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.ServletHandler servlet /solr|/techproducts/query|null -> default@5c13d641==org.eclipse.jetty.servlet.DefaultServlet,jsp=null,order=0,inst=true
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.ServletHandler chain=SolrRequestFilter->default@5c13d641==org.eclipse.jetty.servlet.DefaultServlet,jsp=null,order=0,inst=true
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.e.j.s.ServletHandler call filter SolrRequestFilter
2018-01-08 03:39:43.697 DEBUG (qtp42121758-16) [ ] o.a.h.s.a.s.AuthenticationFilter Request [http://sandbox.hortonworks.com:8443/solr/techproducts/query?q=*] triggering authentication
2018-01-08 03:39:43.697 WARN (qtp42121758-16) [ ] o.a.h.s.a.s.KerberosAuthenticationHandler 'Authorization' does not start with 'Negotiate' : Basic dG9tOnRvbS1wYXNzd29yZA==
2018-01-08 03:39:43.698 DEBUG (qtp42121758-16) [ ] o.e.j.s.ErrorPageErrorHandler getErrorPage(GET /solr/techproducts/query) => error_page=null (from global default)
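For comparison, the shape of request that Solr's KerberosAuthenticationHandler accepts, versus the Basic-auth call made to the gateway. A sketch only: Solr's default port 8983 and the /gateway/default/solr path are assumptions, and the tom user is simply the one visible in the Basic header above:
# direct to Solr: SPNEGO produces an "Authorization: Negotiate ..." header, which is what the handler expects
kinit tom
curl --negotiate -u : "http://sandbox.hortonworks.com:8983/solr/techproducts/query?q=*"
# via Knox: the client sends Basic auth to the gateway; Knox then needs to authenticate to Solr
# with its own Kerberos identity rather than forwarding the Basic header, which is what the 401 above shows
curl -ku tom:tom-password "https://sandbox.hortonworks.com:8443/gateway/default/solr/techproducts/query?q=*"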
Labels:
- Apache Knox
- Apache Ranger
- Apache Solr
11-14-2016
01:29 AM
1 Kudo
Hi All, I have a question regarding the common name attribute of key stores when enabling SSL for HDFS, MapReduce and YARN along with the Ranger HDFS plugin. I have read articles that state that the common name needs to be identical across the cluster for the xasecure.policymgr.clientssl.keystore key store. This key store seems to be responsible for protecting the Ranger console, so when it is not set to the FQDN of the console server it breaks the Ranger console. https://community.hortonworks.com/articles/16373/ranger-ssl-pitfalls.html If this configuration is set then the journal nodes complain about the common name in the certificate not being the same as the hostname. Additionally, the configuration item common.name.for.certificate in hdfs-site.xml and the "Common Name for Certificate" value in the ranger plug-in policies imply that a single common name is required in all keystores across the cluster:
response={"httpStatusCode":400,"statusCode":1,"msgDesc":"Unauthorized access. No common name for certificate set. Please check your service config","messageList":[{"name":"OPER_NOT_ALLOWED_FOR_ENTITY","rbKey":"xa.error.oper_not_allowed_for_state","message":"Operation not allowed for entity"}]}, serviceName=<clusterName>_hadoop
If the key stores have the FQDN set then the name nodes throw an SSLException:
javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No name matching <hostname> found
I am probably missing something. Do I need two distinct sets of keystores? One with an identical common name and one set with the FQDN for each host? Or should I be using a common subject alt name in the certificates?
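For what it's worth, a keytool sketch of the two kinds of keystore the question contrasts; the paths, aliases and the rangerHdfsPlugin common name are placeholders, not a recommendation:
# per-host keystore for the HTTPS endpoints (ssl-server.xml): CN is the host's FQDN
keytool -genkeypair -alias $(hostname -f) -keyalg RSA \
  -dname "CN=$(hostname -f),OU=hadoop" -keystore /etc/security/serverKeys/keystore.jks
# plugin keystore for xasecure.policymgr.clientssl.keystore: the same CN on every node,
# matching the "Common Name for Certificate" value set on the Ranger HDFS service
keytool -genkeypair -alias rangerplugin -keyalg RSA \
  -dname "CN=rangerHdfsPlugin,OU=hadoop" -keystore /etc/security/serverKeys/ranger-plugin-keystore.jks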
Labels:
- Apache Ranger
10-05-2016
07:58 AM
Knox_sample also works now
10-05-2016
07:07 AM
@mrizvi I downloaded the 2.5 sandbox and got the same issue as you describe. The problem seems to be that the directory for previous deployments can't be deleted, and this causes the service for the topologies to fail to start. I eventually got mine working by moving all of the topology xml files out of /usr/hdp/current/knox-server/conf/topologies and restarting knox. It automatically populates the default, knoxsso and admin files back into the folder. I was able to list the files using the default topology. I moved the knox_sample.xml back into the folder and did another restart. It failed to start due to the temp folder being unable to be deleted. So I catted the knox_sample.xml into another file knox_sample2.xml and restarted again. I was able to list the files through knox_sample2. It is more of a workaround than anything else. I don't know why the temp folder can't be deleted. I couldn't delete the folder manually, and when I tried I got an invalid argument error:
rmdir /var/lib/knox/data-2.5.0.0-1245/deployments/knoxsso.topo.157239f6c28/%2Fknoxauth/META-INF/temp/jsp
rmdir: failed to remove `jsp/': Invalid argument
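A condensed sketch of the workaround above, using the paths from this post; bouncing Knox through Ambari is assumed, and the final curl just confirms the new topology answers:
# park the existing topologies so Knox regenerates default, knoxsso and admin on restart
mkdir -p /tmp/knox-topologies-backup
mv /usr/hdp/current/knox-server/conf/topologies/*.xml /tmp/knox-topologies-backup/
# restart Knox (e.g. via Ambari), then reintroduce the sample topology under a new name
cp /tmp/knox-topologies-backup/knox_sample.xml /usr/hdp/current/knox-server/conf/topologies/knox_sample2.xml
# restart Knox again and test through the new topology
curl -ik -u guest:guest-password "https://sandbox.hortonworks.com:8443/gateway/knox_sample2/webhdfs/v1/?op=LISTSTATUS"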
09-30-2016
08:36 AM
It's a different error from the one in the gateway log attached to the link I posted, so it probably isn't your issue.
The description of the service-test command says: "This command requires a running instance of Knox to be present on the same machine. It will execute a test to make sure all services are accessible through the gateway URLs. Errors are reported and suggestions to resolve any problems are returned. JSON formatted." I'm confused now. This implies that Knox isn't running, but the results for the user-auth test say that it is. I'm running 2.4 so I will try to set up 2.5 over the weekend for myself. Do you have ranger enabled with knox? Can you post the gateway.log error, with debug enabled, for when you make your curl request? Cheers
09-30-2016
08:03 AM
@mrizvi Quite alright. Maybe a long shot but I found this post with a similar issue to the one you are experiencing. service unavailable
09-30-2016
01:56 AM
Might be a good idea to check if you have any knox zombies running. They can hang on to files and prevent deletes:
ps -ef | grep -i knox
09-30-2016
01:48 AM
@mrizvi I don't get that warning so it may be significant. I'm not sure. If I had to hazard a guess as to why it's failing to delete, it's either a path or file permissions issue. Can you post the whole log? Also can you try a few tests with the /usr/hdp/current/knox-server/bin/knoxcli.sh utility? It might provide a bit more meaningful info.
./knoxcli.sh service-test --cluster knox_sample --u guest --p guest-password --hostname sandbox.hortonworks.com --port 8443
./knoxcli.sh user-auth-test --cluster knox_sample --u guest --p guest-password --hostname sandbox.hortonworks.com --port 8443
09-29-2016
11:42 PM
@mrizvi I would bump up the log levels for the gateway to DEBUG through Ambari as a next step. In the Knox advanced tab for the gateway log4j, change log4j.rootLogger=ERROR, drfa to log4j.rootLogger=DEBUG, drfa. Then submit a curl request and check the gateway.log. There will be a lot of output, so search for 'guest'.
09-29-2016
07:39 AM
@mrizvi Apologies, format issues I think. Try this one: knox-sample.xml
09-29-2016
04:05 AM
@mrizvi No worries. Can you contact the service without knox?
curl -i -v -k -u guest:guest-password 'http://sandbox.hortonworks.com:50070/webhdfs/v1/?op=LISTSTATUS'
I have attached a topology file that currently works on my sandbox for you to compare. Cheers
09-29-2016
01:28 AM
2 Kudos
This issue was resolved by updating the zookeeper jaas config to use a keytab rather than a ticket cache. The cache was expiring and the auth failing. One thing that I've learned is that updated configs can take a while to propagate through the cluster. I had tried this config before but likely didn't wait long enough before discounting it as the solution.
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/hbase.service.keytab"
  principal="hbase/<your_host>@<your_realm>";
};
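If anyone wants to sanity-check the keytab before relying on it, a quick sketch using the path from the config above:
# list the principals and key versions held in the keytab
klist -kt /etc/security/keytabs/hbase.service.keytab
# confirm the keytab actually authenticates as the principal the jaas file names
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/$(hostname -f)
klist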
09-28-2016
10:35 PM
@mrizvi A couple of things to check would be that your knox_sample topology has the service mapping to webhdfs:
<service>
  <role>WEBHDFS</role>
  <url>http://sandbox:50070/webhdfs</url>
</service>
and that the webhdfs server is listening on that port:
netstat -nl | grep 50070
Take a look at your knox logs as well - they may have some extra info: /usr/hdp/current/knox-server/logs/gateway.log
Cheers
09-23-2016
01:00 AM
1 Kudo
zookeeperlog-startup.txt Update: The above configuration now functions after a few service restarts. However, I am now experiencing another issue. Requests to HBase REST now time out after the specified number of client retries with an HTTP error of service unavailable. The Zookeeper logs indicate a failure to establish a quorum. I have attached an excerpt of the log file. I have three nodes, and Zookeeper is running on both of the other nodes according to Ambari. Has anyone come across this problem before? I have read some info that it may be related to DNS setup but have so far had no success.
2016-09-23 08:29:51,812 - DEBUG [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@609] - id: 3, proposed id: 3, zxid: 0x73000079ea, proposed zxid: 0x73000079ea
2016-09-23 08:29:51,814 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2016-09-23 08:29:51,814 - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@366] - IOException stack trace
java.io.IOException: ZooKeeperServer not running
at org.apache.zookeeper.server.NIOServerCnxn.readLength(NIOServerCnxn.java:931)
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:237)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
2016-09-23 08:29:51,816 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /34.45.6.3:34139 (no session established for client)
2016-09-23 08:29:51,816 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2016-09-23 08:29:51,816 - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@366] - IOException stack trace
java.io.IOException: ZooKeeperServer not running
at org.apache.zookeeper.server.NIOServerCnxn.readLength(NIOServerCnxn.java:931)
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:237)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
2016-09-23 08:29:51,816 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /34.45.6.2:60031 (no session established for client)
2016-09-23 08:29:51,817 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /34.45.6.2:60032
2016-09-23 08:29:51,817 - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperSaslServer@78] - serviceHostname is 'namenode_1'
2016-09-23 08:29:51,817 - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperSaslServer@79] - servicePrincipalName is 'zookeeper'
2016-09-23 08:29:51,817 - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperSaslServer@80] - SASL mechanism(mech) is 'GSSAPI'
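If it helps anyone hitting a similar quorum problem, each member's view of the ensemble can be checked with ZooKeeper's four-letter commands; the node names here are placeholders:
# ask each ZooKeeper whether it is serving and what role it holds
for zk in node1 node2 node3; do
  echo -n "$zk: "; echo stat | nc $zk 2181 | grep -E 'Mode:|not currently serving'
done
# a healthy three-node ensemble shows one leader and two followers;
# "This ZooKeeper instance is not currently serving requests" matches the exceptions in the log above
echo ruok | nc node1 2181   # a live server answers imok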
09-14-2016
02:11 AM
3 Kudos
Hi all, I am trying to configure HBase REST to allow proxying of users. I have the Ranger HBase plugin enabled to provide the HBase ACLs. I am able to retrieve resources successfully from the repository using curl commands, however the "doAs" proxying doesn't appear to be working. The Ranger audit logs show all operations being performed by the proxy user HTTP rather than the impersonated user. I have configured the hbase-site.xml file with the following additional settings to support impersonation:
hadoop.proxyuser.HTTP.groups: hadoop
hadoop.proxyuser.HTTP.hosts: *
hadoop.security.authorization: true
hbase.rest.authentication.kerberos.keytab: /etc/security/keytabs/hbase.service.keytab
hbase.rest.authentication.kerberos.principal: HTTP/_HOST@<REALM>
hbase.rest.authentication.type: kerberos
hbase.rest.kerberos.principal: hbase/_HOST@<REALM>
hbase.rest.keytab.file: /etc/security/keytabs/hbase.service.keytab
hbase.rest.support.proxyuser: true
I have added an HTTP user to Ranger, added the user to an HBase policy giving 'RWCA' access, and granted the same privileges to HTTP in HBase: grant 'HTTP', 'RWCA'. I am using the following curl command to query HBase:
curl -ivk --negotiate -u : -H "Content-Type: application/octet-stream" -X GET "http://<namenode>:60080/<resource>/234998/d:p?doAs=hadoopuser1" -o test4.jpg
I was expecting that Ranger would apply the ACLs of the user being impersonated by the proxy user to limit access and provide audit logging in the same way as the webhdfs plugin. Is this possible? If so, am I missing something in my configuration? Any advice appreciated. Regards Andrew
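Two quick checks that sometimes help when doAs appears to be ignored. A sketch only: restarting the daemon is one way to make sure the proxyuser settings above have actually been loaded, and the port is the one used in the curl above:
# make sure the REST server has re-read hbase-site.xml since the proxyuser settings were added
hbase-daemon.sh stop rest
hbase-daemon.sh start rest -p 60080
# confirm the grants HBase itself has recorded (should include HTTP with RWCA)
echo "user_permission" | hbase shell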
Labels:
- Apache HBase
- Apache Ranger