HBase REST proxying

Rising Star

Hi all,

I am trying to configure the HBase REST server to allow proxying of users. I have the Ranger HBase plugin enabled to provide the HBase ACLs.

I am able to retrieve resources successfully from the repository using curl commands; however, the "doAs" proxying doesn't appear to be working.

The Ranger audit logs show all operations being performed by the proxy user HTTP rather than the impersonated user.

I have configured the hbase-site.xml file with the following additional settings to support impersonation.

hadoop.proxyuser.HTTP.groups: hadoop
hadoop.proxyuser.HTTP.hosts: *
hadoop.security.authorization: true
hbase.rest.authentication.kerberos.keytab: /etc/security/keytabs/hbase.service.keytab
hbase.rest.authentication.kerberos.principal: HTTP/_HOST@<REALM>
hbase.rest.authentication.type: kerberos
hbase.rest.kerberos.principal: hbase/_HOST@<REALM>
hbase.rest.keytab.file: /etc/security/keytabs/hbase.service.keytab
hbase.rest.support.proxyuser: true
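
For reference, the same settings expressed as hbase-site.xml entries would look roughly like the sketch below (values copied from the list above; <REALM> and the keytab paths are the placeholders from the post, and only a subset of the properties is shown):

<!-- Sketch of the equivalent hbase-site.xml entries -->
<property>
  <name>hbase.rest.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.rest.support.proxyuser</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>hadoop</value>
</property>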

I have added an HTTP user to Ranger, added that user to an HBase policy giving 'RWCA' access, and have granted the same privileges to HTTP in the HBase shell: grant 'HTTP', 'RWCA'.

I am using the following curl command to query HBase.

curl -ivk --negotiate -u : -H "Content-Type: application/octet-stream" -X GET "http://<namenode>:60080/<resource>/234998/d:p?doAs=hadoopuser1" -o test4.jpg
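
For what it's worth, one way to check whether doAs is being honoured (a generic test, not from the original post; "someotheruser" is a hypothetical account that the Ranger policy does not cover) is to repeat the request impersonating that user and confirm it is rejected and audited under that name rather than under HTTP:

# Same request, but impersonating a user the Ranger policy should deny; a 403
# response (and a Ranger audit entry for that user rather than for HTTP)
# would indicate that the doAs impersonation is actually taking effect.
curl -ivk --negotiate -u : \
  -X GET "http://<namenode>:60080/<resource>/234998/d:p?doAs=someotheruser"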

I was expecting that Ranger would apply the ACLs of the user being impersonated by the proxy user to limit access and provide audit logging, in the same way as the WebHDFS plugin does. Is this possible?

If so, am I missing something in my configuration?

Any advice appreciated.

Regards

Andrew


2 REPLIES

Rising Star

Attachment: zookeeperlog-startup.txt

Update:

The above configuration now functions after a few service restarts.

However, I am now experiencing another issue.

Requests to HBase REST now time out after the specified number of client retries with an HTTP "service unavailable" error.

The ZooKeeper logs indicate a failure to establish a quorum; I have attached an excerpt of the log file. I have three nodes, and according to Ambari, ZooKeeper is running on both of the other nodes. Has anyone come across this problem before?

I have read that it may be related to DNS setup, but have so far had no success.

2016-09-23 08:29:51,812 - DEBUG [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@609] - id: 3, proposed id: 3, zxid: 0x73000079ea, proposed zxid: 0x73000079ea
2016-09-23 08:29:51,814 - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2016-09-23 08:29:51,814 - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@366] - IOException stack trace
java.io.IOException: ZooKeeperServer not running
	at org.apache.zookeeper.server.NIOServerCnxn.readLength(NIOServerCnxn.java:931)
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:237)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2016-09-23 08:29:51,816 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /34.45.6.3:34139 (no session established for client)
2016-09-23 08:29:51,816 - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2016-09-23 08:29:51,816 - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@366] - IOException stack trace
java.io.IOException: ZooKeeperServer not running
	at org.apache.zookeeper.server.NIOServerCnxn.readLength(NIOServerCnxn.java:931)
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:237)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2016-09-23 08:29:51,816 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /34.45.6.2:60031 (no session established for client)
2016-09-23 08:29:51,817 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /34.45.6.2:60032
2016-09-23 08:29:51,817 - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperSaslServer@78] - serviceHostname is 'namenode_1'
2016-09-23 08:29:51,817 - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperSaslServer@79] - servicePrincipalName is 'zookeeper'
2016-09-23 08:29:51,817 - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperSaslServer@80] - SASL mechanism(mech) is 'GSSAPI'
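
For anyone investigating a similar quorum failure, the state of each node can be checked with ZooKeeper's built-in "stat" command (a generic check, not from the thread; node1/node2/node3 are placeholder hostnames):

# Ask every ensemble member for its status; a healthy three-node ensemble
# reports one "Mode: leader" and two "Mode: follower". A node that answers
# "This ZooKeeper instance is not currently serving requests" matches the
# "ZooKeeperServer not running" errors in the log above.
for host in node1 node2 node3; do
  echo "== $host =="
  echo stat | nc "$host" 2181
done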

Rising Star (accepted solution)

This issue was resolved by updating the ZooKeeper JAAS config to use a keytab rather than a ticket cache.

The cache was expiring and the auth was failing. One thing that I've learned is that updated configs can take a while to propagate through the cluster. I had tried this config before, but likely didn't wait long enough before discounting it as the solution.

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/hbase.service.keytab"
  principal="hbase/<your_host>@<your_realm>";
};