
SQL processors (jdbc:phoenix) hanging in Secured HDP

Expert Contributor

Hello,

I have a secured HDP cluster and a NiFi flow that ingests JSON and inserts it into a Phoenix table: GetFile -> ConvertJSONToSQL -> ReplaceText (rewriting INSERT into UPSERT, since Phoenix only supports UPSERT) -> PutSQL. Prior to enabling Kerberos, the flow was working fine.

After enabling Kerberos, I changed the connection pool URL to:

jdbc:phoenix:localhost:2181:/hbase-secure:hbase-dev@DEV.COM:/etc/security/keytabs/hbase.headless.keytab

This connection URL works fine with sqlline/the Phoenix client.
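
For reference, the standalone JDBC test below is roughly equivalent to what sqlline does. It is a minimal sketch: it assumes the Phoenix client JAR (and the cluster's hbase-site.xml/core-site.xml) are on the classpath, and it queries SYSTEM.CATALOG only because that table always exists. A failure here would point at the Phoenix/Kerberos setup rather than at the NiFi processors.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PhoenixSmokeTest {
        public static void main(String[] args) throws Exception {
            // Same URL shape as above: <zk-host>:<zk-port>:<znode>:<principal>:<keytab>
            final String url = "jdbc:phoenix:localhost:2181:/hbase-secure:"
                    + "hbase-dev@DEV.COM:/etc/security/keytabs/hbase.headless.keytab";
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver"); // register the driver
            try (Connection conn = DriverManager.getConnection(url);
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM SYSTEM.CATALOG")) {
                rs.next();
                System.out.println("Connected; SYSTEM.CATALOG rows: " + rs.getLong(1));
            }
        }
    }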

Now, when I start the flow, it initially hangs for a while and I end up with the logs below:

Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Thu Jan 26 06:36:52 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68021: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=ip-123-431-1-123.ec2.internal,16020,1485391803237, seqNum=0
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549) ~[na:na]
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388) ~[na:na]
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[na:na]
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:231) ~[na:na]
	... 18 common frames omitted
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68130: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=ip-172-40-1-51.ec2.internal,16020,1485391803237, seqNum=0
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159) ~[na:na]
	at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65) ~[na:na]
	... 3 common frames omitted

Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to ip-172-40-1-51.ec2.internal/172.40.1.51:16020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to ip-172-40-1-51.ec2.internal/172.40.1.51:16020 is closing. Call id=9, waitTime=3
	at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1258) ~[na:na]
	at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1229) ~[na:na]
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213) ~[na:na]
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287) ~[na:na]
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32741) ~[na:na]
	at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:373) ~[na:na]
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:200) ~[na:na]
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62) ~[na:na]
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) ~[na:na]
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364) ~[na:na]
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338) ~[na:na]
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126) ~[na:na]
	... 4 common frames omitted

Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to ip-172-40-1-51.ec2.internal/172.40.1.51:16020 is closing. Call id=9, waitTime=3
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047) ~[na:na]
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846) ~[na:na]
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574) ~[na:na]

Any ideas on what I could be missing? I see similar behaviour with ExecuteSQL.

Thanks!

4 REPLIES

Super Collaborator

Regarding the java.net.SocketTimeoutException: is port 2181 open? Are you running NiFi and the Phoenix server on the same machine?
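
A quick way to check reachability from the NiFi node (a minimal sketch; the host and port are assumed to match the ZooKeeper quorum in your connection URL):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PortCheck {
        public static void main(String[] args) throws Exception {
            try (Socket s = new Socket()) {
                // Fail fast with a 5-second timeout instead of hanging
                s.connect(new InetSocketAddress("localhost", 2181), 5000);
                System.out.println("ZooKeeper port 2181 is reachable");
            }
        }
    }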

Expert Contributor

No, but all ports are open between the two machines. When I run the Phoenix client (which works), I use the same node that NiFi is running on, so I don't think it's a connectivity issue.

This works from the NiFi node:

/usr/hdp/current/phoenix-client/bin/sqlline.py zk:2181:/hbase-secure:hbase-dev-hadoop@DEV.COM:/etc/security/keytabs/hbase.headless.keytab

Master Guru (accepted solution)

The errors look similar to the ones in some other HCC posts:

https://community.hortonworks.com/questions/50301/call-for-help-fail-to-run-puthdfshbase-1-1-2-clien...

https://community.hortonworks.com/questions/66756/spark-hbase-connection-issue.html

Do the suggestions there help at all? If there is an issue with adding JARs to the classpath, you can do this via the "Database driver location(s)" property. If there is an issue with including Hadoop configuration files, you can try adding those to the same property as well, although I don't know whether that will work.
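
If the Hadoop configuration files turn out to be the problem, one workaround sketch (untested here; the paths below are assumptions for a typical HDP layout) is to wrap hbase-site.xml and core-site.xml in a JAR and list that JAR in "Database driver location(s)", since, as noted in the last reply below, only ".jar" files are picked up:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.util.jar.JarEntry;
    import java.util.jar.JarOutputStream;

    public class PackSiteConfigs {
        public static void main(String[] args) throws Exception {
            // Assumed HDP client-config paths; adjust for your cluster
            String[] files = {"/etc/hbase/conf/hbase-site.xml", "/etc/hadoop/conf/core-site.xml"};
            try (JarOutputStream jar = new JarOutputStream(new FileOutputStream("/tmp/site-conf.jar"))) {
                for (String path : files) {
                    // Put each file at the JAR root so it lands on the classpath root
                    jar.putNextEntry(new JarEntry(path.substring(path.lastIndexOf('/') + 1)));
                    try (FileInputStream in = new FileInputStream(path)) {
                        byte[] buf = new byte[8192];
                        int n;
                        while ((n = in.read(buf)) != -1) {
                            jar.write(buf, 0, n);
                        }
                    }
                    jar.closeEntry();
                }
            }
        }
    }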

Contributor

Just commenting on this for future visitors to this post: only files that end in ".jar" are picked up by the driver class loader.

Here's the relevant source code from DBCPConnectionPool.java:

    protected ClassLoader getDriverClassLoader(String locationString, String drvName) throws InitializationException {
        if (locationString != null && locationString.length() > 0) {
            try {
                // Split and trim the entries
                final ClassLoader classLoader = ClassLoaderUtils.getCustomClassLoader(
                        locationString,
                        this.getClass().getClassLoader(),
                        // The FilenameFilter: only names ending in ".jar" are accepted
                        (dir, name) -> name != null && name.endsWith(".jar"));
                // ... (remainder of the method truncated in the original post)
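
To make the effect of that filter concrete, here is a tiny self-contained demonstration (the file names are made up for illustration): anything that doesn't end in ".jar", such as hbase-site.xml, is silently skipped.

    import java.io.FilenameFilter;

    public class JarFilterDemo {
        public static void main(String[] args) {
            // The same predicate used in DBCPConnectionPool above
            FilenameFilter jarsOnly = (dir, name) -> name != null && name.endsWith(".jar");
            System.out.println(jarsOnly.accept(null, "phoenix-client.jar")); // true: would be loaded
            System.out.println(jarsOnly.accept(null, "hbase-site.xml"));     // false: silently ignored
        }
    }

So configuration files listed directly in "Database driver location(s)" are never added to the classpath; they would need to be wrapped in a JAR first (see the sketch in the accepted solution above).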