
Phoenix Query Server Connection URL example?

Master Guru

I am looking for an example PQS connection URL that includes Kerberos parameters. My cluster is kerberized, and the example on the Apache Phoenix site shows this:

jdbc:phoenix:thin:url=<scheme>://<server-hostname>:<port>[...]

The site documents the Kerberos parameters, but I would like to see a working example. Any example is appreciated.

Extra Info:

Configurations relating to the server connecting to a secure cluster:

| Property | Description | Default |
| --- | --- | --- |
| hbase.security.authentication | When set to "kerberos", the server will attempt to log in before initiating Phoenix connections. | Specified in hbase-default.xml |
| phoenix.queryserver.keytab.file | The keytab file to use when logging in. | unset |
| phoenix.queryserver.kerberos.principal | The Kerberos principal to use when authenticating. | unset |
| phoenix.queryserver.dns.nameserver | The DNS nameserver to use. | default |
| phoenix.queryserver.dns.interface | The name of the network interface to query for DNS. | default |
1 ACCEPTED SOLUTION


18 REPLIES

Super Collaborator

Those parameters are for the Query Server configuration. They need to be added to hbase-site.xml so that the Query Server can connect to HBase. As far as I know, there are no Kerberos configuration parameters in the JDBC connection string for PQS.
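For reference, those server-side properties would end up in hbase-site.xml roughly like the sketch below. The principal, realm, and keytab path are placeholders, and the `_HOST` token is the usual Hadoop convention for substituting the local hostname; substitute values appropriate to your cluster.

```xml
<!-- Sketch: PQS Kerberos settings in hbase-site.xml (placeholder values) -->
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>phoenix.queryserver.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>phoenix.queryserver.keytab.file</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
</property>
```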

Master Guru

@ssoldatov I don't understand what you mean; the Kerberos principal is different per user. This needs to be defined, as it is today, in the JDBC connection as @Constantin Stanca provided. I am looking for an example of a thin JDBC URL with Kerberos.

Super Collaborator

@Sunile Manjee PQS works like most other Hadoop services: it requires its own keytab and principal to access HDFS/HBase (please note that the documentation says "Configurations relating to server connecting to a secure cluster"). Once you have configured the PQS Kerberos keytab/principal, any client may work with PQS without restrictions. No client auth is supported at the moment; it will be added in the HDP 2.5 release. More details can be found at https://issues.apache.org/jira/browse/PHOENIX-2792

Master Guru

@ssoldatov I am aware it is in 2.5, and that is exactly where I am testing this. I need an example URL.

Super Collaborator

This implementation uses the Kerberos ticket cache, so you need to run kinit before starting the client. No changes to the URL are required. Also make sure that the hbase-site.xml on the client's classpath has hbase.security.authentication set to kerberos.


PHOENIX-2792 says fixed in 4.8.0, but HDP 2.5 ships 4.7.0.2.5.0.0-1245. Is Hortonworks confident this feature made it into the HDP 2.5 distribution?

Super Guru

There are two sides here. The documentation that you listed and @ssoldatov confirmed is accurate for PQS connecting to HBase. The other side, which is likely missing from the "official" Apache Phoenix documentation, is the thin-client configuration properties.

These properties are presently documented at http://calcite.apache.org/avatica/docs/client_reference.html. The sqlline-thin.py script will automatically configure them for you, but you have to provide them yourself when using the thin JDBC driver directly. In practice, when you have already performed a Kerberos login, it would look something like the following:

jdbc:phoenix:thin:url=<scheme>://<server-hostname>:<port>;authentication=SPNEGO

Alternatively, you can provide a principal and keytab which the thin driver will use to login automatically:

jdbc:phoenix:thin:url=<scheme>://<server-hostname>:<port>;authentication=SPNEGO;principal=my_user;keytab=/home/my_user/my_user.keytab
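To make the shape of these URLs concrete, here is a small sketch of assembling one for plain JDBC. The host name, principal, and keytab path are placeholder values, and `serialization=PROTOBUF` is included because that is what sqlline-thin passes for the thin driver's Protobuf transport (visible in the sqlline-thin output later in this thread).

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class PhoenixThinKerberosExample {

    // Assemble a thin-driver URL like the ones above. A null principal/keytab
    // means "rely on an existing Kerberos login" (e.g. from a prior kinit).
    static String buildUrl(String scheme, String host, int port,
                           String principal, String keytab) {
        StringBuilder url = new StringBuilder();
        url.append("jdbc:phoenix:thin:url=")
           .append(scheme).append("://").append(host).append(':').append(port)
           .append(";serialization=PROTOBUF;authentication=SPNEGO");
        if (principal != null && keytab != null) {
            url.append(";principal=").append(principal)
               .append(";keytab=").append(keytab);
        }
        return url.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical host and credentials -- substitute your own.
        String url = buildUrl("http", "pqs.example.com", 8765,
                              "my_user", "/home/my_user/my_user.keytab");
        System.out.println(url);
        // With a PQS actually running, you would then open a connection:
        // try (Connection conn = DriverManager.getConnection(url)) { ... }
    }
}
```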

Master Guru

@Josh Elser That is exactly what I needed. Now I need to test it. If I run into issues, I will open another HCC question. Thanks again, Josh!


Hi Josh,

I'm using HDP 2.5 (upgraded from 2.4.2) and have attempted to follow the above instructions, but it's not working from a Java client or on the master (./sqlline.py works, but not the "thin" version).

I've run kinit,

kinit -k -t /etc/security/keytabs/hbase.headless.keytab hbase-cluster1

Then I run (obfuscated),

./sqlline-thin.py "http://b3e073.*****.com:8765;authentication=SPNEGO"

I also tried,

./sqlline-thin.py "http://localhost:8765;authentication=SPNEGO"

The (obfuscated) output is,

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.0.0-1245/phoenix/phoenix-4.7.0.2.5.0.0-1245-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.0.0-1245/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/11/03 05:37:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:thin:url=http://b3e073.*****.com:8765;serialization=PROTOBUF;authentication=SPNEGO none none org.apache.phoenix.queryserver.client.Driver
Connecting to jdbc:phoenix:thin:url=http://b3e073.*****.com:8765;serialization=PROTOBUF;authentication=SPNEGO
java.lang.RuntimeException: Failed to execute HTTP Request, got HTTP/404
at org.apache.calcite.avatica.remote.AvaticaCommonsHttpClientSpnegoImpl.send(AvaticaCommonsHttpClientSpnegoImpl.java:148)
at org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:44)
at org.apache.calcite.avatica.remote.ProtobufService.apply(ProtobufService.java:81)
at org.apache.calcite.avatica.remote.Driver.connect(Driver.java:175)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:804)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
at org.apache.phoenix.queryserver.client.SqllineWrapper$1.run(SqllineWrapper.java:78)
at org.apache.phoenix.queryserver.client.SqllineWrapper$1.run(SqllineWrapper.java:75)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:75)
sqlline version 1.1.8

Any idea where I'm going wrong?

Thanks, Chris.