
Hortonworks 2.5: Ambari and Phoenix PQS Kerberos Setup

Explorer

Hi (for the attention of @Josh Elser),

This is a follow-up question to the thread at

https://community.hortonworks.com/questions/47138/phoenix-query-server-connection-url-example.html

This is where @Christopher Bridge was getting Kerberos checksum errors.

I found that the two parameters:

  • hbase-site/phoenix.queryserver.kerberos.principal
  • hbase-site/phoenix.queryserver.keytab.file

are still set to hbase/_HOST and /etc/security/keytabs/hbase.service.keytab, respectively.

This seems to be related to this Ambari JIRA:

https://issues.apache.org/jira/browse/AMBARI-16171

So I want to change them to the recommended setup of HTTP/_HOST and spnego.service.keytab, but the fields are locked in Ambari and are not editable in the Kerberos security setup.
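
For reference, the values I'm trying to end up with would look something like this in hbase-site (EXAMPLE.COM below stands in for our actual Kerberos realm):

    <property>
      <name>phoenix.queryserver.kerberos.principal</name>
      <value>HTTP/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
      <name>phoenix.queryserver.keytab.file</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>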

How should I change the properties?

Thanks

Kevin

1 ACCEPTED SOLUTION

Super Guru

@Kevin Ng, make sure the values of hadoop.proxyuser.HTTP.groups and hadoop.proxyuser.HTTP.hosts in HDFS's core-site configuration match your deployment. You probably want to set the groups to "*", and the hosts should be a comma-separated list of the FQDNs where PQS is deployed.

You can also try enabling DEBUG logging for HBase and checking the RegionServer log for an error. I would imagine you will see an error about the HTTP/FQDN principal not being allowed to impersonate the end user connecting to PQS.


9 REPLIES

Super Guru

Very strange, @Kevin Ng. I'm not sure why the fix in AMBARI-16171 didn't properly update the principal and keytab in the UI. What version of Ambari are you using?

Maybe you can override these properties in the "Custom hbase-site" configuration section? I don't know enough about how Ambari is supposed to work here.

Explorer

Josh,

I managed to change the settings by removing the PQS Service and then adding it back.

However, now I'm getting another error:

    java.lang.RuntimeException: java.sql.SQLException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
      Mon Nov 07 12:47:48 EST 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68345: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=obfuscated.com,16020,1478537128525, seqNum=0
        at org.apache.calcite.avatica.jdbc.JdbcMeta.openConnection(JdbcMeta.java:619)
        at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:299)
        at org.apache.calcite.avatica.remote.Service$OpenConnectionRequest.accept(Service.java:1748)
        at org.apache.calcite.avatica.remote.Service$OpenConnectionRequest.accept(Service.java:1728)
        at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:95)
        at org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
        at org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:120)
        at org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:118)
        at org.apache.phoenix.queryserver.server.Main$PhoenixDoAsCallback$1.run(Main.java:290)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
        at org.apache.phoenix.queryserver.server.Main$PhoenixDoAsCallback.doAsRemoteUser(Main.java:287)
        at org.apache.calcite.avatica.server.HttpServer$Builder$1.doAsRemoteUser(HttpServer.java:648)
        at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:117)
        at org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)
        at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
        at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
        at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
        at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
        at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
        at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
        at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.sql.SQLException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
      Mon Nov 07 12:47:48 EST 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68345: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=obfuscated.com,16020,1478537128525, seqNum=0
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2590)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2327)
        at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2327)
        at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
        at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:142)
        at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
        at java.sql.DriverManager.getConnection(DriverManager.java:664)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at org.apache.calcite.avatica.jdbc.JdbcMeta.openConnection(JdbcMeta.java:616)
        ... 24 more
    Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
      Mon Nov 07 12:47:48 EST 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68345: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=obfuscated.com,16020,1478537128525, seqNum=0
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
        at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
        at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
        at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161)
        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
        at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
        at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
        at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:405)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2358)

Any ideas?

Thanks,

Kevin

Contributor

Did you ever find the resolution to this other error?

Super Guru

@Tom Stewart, I would encourage you to open your own question if you're experiencing problems.

In general, this error is related either to the client not providing Kerberos authentication (core-site.xml and/or hbase-site.xml missing from the client's classpath) or to an impersonation problem (the client is "impersonating" another user, which the proxyuser configuration disallows).
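
As a quick sanity check, a minimal thin-client session looks something like the sketch below; the principal, keytab path, and PQS host name are placeholders, and the default PQS port of 8765 is assumed:

    # get a Kerberos ticket for the end user first (placeholder principal/keytab)
    kinit -kt /etc/security/keytabs/myuser.keytab myuser@EXAMPLE.COM

    # point the thin client at the Phoenix Query Server (standard HDP client path)
    /usr/hdp/current/phoenix-client/bin/sqlline-thin.py http://pqs-host.example.com:8765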

Contributor

Sorry, I was trying to get clarification on this *second* issue, as I was hitting the exact same scenario. For others in the future: the response by @Josh Elser that is upvoted at the top (when sorted by Votes) also worked for me to fix this java.net.SocketTimeoutException when connecting to PQS; I was missing the PQS host in hadoop.proxyuser.HTTP.hosts. I didn't realize the upvoted response was for this second issue because the comment sorting was showing things out of order for me. I never did track down an impersonation error message, but I also didn't increase the logging to try very hard to capture it.

Super Guru

@Kevin Ng, make sure the values of hadoop.proxyuser.HTTP.groups and hadoop.proxyuser.HTTP.hosts in HDFS's core-site configuration match your deployment. You probably want to set the groups to "*", and the hosts should be a comma-separated list of the FQDNs where PQS is deployed.
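
As a rough sketch, assuming PQS runs on two hosts (the host names below are placeholders), the relevant core-site.xml entries would look something like:

    <property>
      <name>hadoop.proxyuser.HTTP.groups</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.HTTP.hosts</name>
      <value>pqs1.example.com,pqs2.example.com</value>
    </property>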

You can also try enabling DEBUG logging for HBase and checking the RegionServer log for an error. I would imagine you will see an error about the HTTP/FQDN principal not being allowed to impersonate the end user connecting to PQS.
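
If you go the DEBUG route, one way to do it (a sketch, assuming the RegionServer's log4j.properties is managed through Ambari's hbase-log4j template, as it normally is on HDP) is to add lines like:

    # DEBUG for HBase itself plus the Hadoop security/UGI classes
    log4j.logger.org.apache.hadoop.hbase=DEBUG
    log4j.logger.org.apache.hadoop.security=DEBUG

and then restart the RegionServers before retrying the connection.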

Explorer

Josh,

Thanks! That did the trick.

Cheers,

Kevin

Super Guru

Superb. I'm glad you got it worked out. Ambari should have done all of the above for you. It would be great if you could share the version information for your installation so we can figure out why you had to do this by hand.

Explorer

We're on Ambari Version 2.4.0.1

We upgraded from HDP 2.4.4 to 2.5

Hope that helps.