Created 08-03-2016 03:19 PM
While connecting to Phoenix via JDBC on an EC2 instance, I get the error below. This only happens when connecting remotely; when I copy the jar to the server and run it there, the connection works fine:
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=35, exceptions:
Wed Aug 03 18:56:17 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote]
Wed Aug 03 18:56:17 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list:
Wed Aug 03 18:56:18 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list:
Wed Aug 03 18:56:29 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=]
......
Wed Aug 03 19:10:37 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=]
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:879)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1213)
    at org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:112)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1902)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:744)
    at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:186)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1236)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1891)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1860)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1860)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:131)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:233)
    at com.lendingpoint.hadoop.phoenixconnect.PhoenixTest.main(PhoenixTest.java:28)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
Wed Aug 03 18:56:17 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=]
Wed Aug 03 18:56:17 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list:
Wed Aug 03 18:56:18 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list:
Wed Aug 03 18:56:29 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=]
Wed Aug 03 18:56:31 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list:
........
Wed Aug 03 19:10:37 IST 2016, RpcRetryingCaller{globalStartTime=1470230766277, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=]
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:441)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:463)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:813)
    ... 20 more
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=]
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1533)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1553)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1704)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 24 more
Caused by: com.google.protobuf.ServiceException: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=]
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:50918)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1564)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1502)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1524)
    ... 28 more
Caused by: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=]
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:424)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:748)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:889)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1222)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 33 more
====================================
Connection String Used:
jdbc:phoenix:<zookeeper ips>:2181:/hbase-unsecure [all ZooKeeper IPs added; also tried with the EC2 internal IPs]
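For reference, this is roughly how the connection is attempted from the client side (a minimal sketch only; the ZooKeeper hostnames are placeholders, and the phoenix-client jar is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixTest {
    public static void main(String[] args) throws Exception {
        // ZooKeeper quorum, client port and znode parent (HDP uses /hbase-unsecure by default)
        String url = "jdbc:phoenix:zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase-unsecure";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}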
HBase is working fine; I can connect through the hbase shell and the list command also works. Ambari shows no issues with HBase or its components, including the Phoenix client.
HBase: 1.1.2
Phoenix: 4.4.0
HDP: 2.4.2
Non-kerberized cluster
On the local machine, the external IP is correctly mapped to the domain name.
Any suggestions, please?
Created 08-03-2016 03:27 PM
org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=ip-172-31-15-225.us-west-2.compute.internal/52.32.173.25:16000]
Can you connect to this IP address and port on your local machine using telnet/netcat? e.g.
telnet 52.32.173.25 16000
You should get a message "Connected to <host>" and not "Connection refused".
If you get the "Connection refused" message, it is likely some firewall or networking issue (as @Ankit Singhal pointed out).
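If telnet/netcat is not available on your machine, the same reachability check can be done with a few lines of Java (a sketch; substitute the host and port from your own error message):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        String host = "52.32.173.25"; // HBase Master host from the exception above
        int port = 16000;             // HBase Master RPC port
        try (Socket socket = new Socket()) {
            // Use the same 10 second timeout the HBase client reports in the stack trace
            socket.connect(new InetSocketAddress(host, port), 10000);
            System.out.println("Connected to " + host + ":" + port);
        } catch (Exception e) {
            System.out.println("Cannot connect: " + e.getMessage());
        }
    }
}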
Created 08-03-2016 03:23 PM
Is port 16000 also open so that any IP can connect to this host?
Created 10-25-2017 01:10 PM
I am having the same problem. Locally everything works fine, but a remote connection gives the same error:
Error:
10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=pdfdg3562/2.53.143.145:16020]
Note: I never specify port 16020 anywhere; I only give port 2181 for ZooKeeper to connect to HBase.
Also, for me,
telnet 2.53.143.145 16020 does NOT work,
while
telnet 2.53.143.145 2181 works fine.
Code snippet for the connection:
conf.set("hbase.zookeeper.quorum", "2.53.143.145");
conf.set("hbase.zookeeper.property.clientPort", "2181");
conf.set("zookeeper.znode.parent", "/hbase");
I have the same problem when I try to connect remotely, and I get the same error:
10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=pkhukhsd237/2.53.143.145:16020]
Note: for me, telnet 2.53.143.145 16020 does not work,
while
telnet 2.53.143.145 2181 works fine.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.property.clientPort", "2181");
conf.set("hbase.client.retries.number", Integer.toString(1));
conf.set("hbase.zookeeper.quorum", "2.53.143.145");
conf.set("zookeeper.znode.parent", "/hbase");
Created 10-25-2017 03:26 PM
ZooKeeper is used to find where HBase is running. The fact that you cannot make a network connection to the host+port that the HBase RegionServer is running on implies one of two things:
1. You have a firewall blocking access to HBase
2. The HBase RegionServers are not listening on the remote interface (only listening on the loopback interface)
The first you need to investigate on your own (we cannot tell you if you're running a firewall). The latter you can check using `netstat` to see what interface the RegionServer process is LISTEN'ing on.
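To make that concrete, here is a minimal sketch (assuming the standard HBase 1.x client API; the table name and row key are placeholders) of why port 16020 still matters even though only 2181 is configured: the client talks to ZooKeeper only to discover where HBase is running, and then opens direct connections to the Master (16000) and RegionServers (16020).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionServerLookupDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Only ZooKeeper is configured explicitly; no HBase ports appear here
        conf.set("hbase.zookeeper.quorum", "2.53.143.145");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("zookeeper.znode.parent", "/hbase");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("my_table"))) { // placeholder table name
            // This Get is served by a RegionServer, so the client opens a new TCP
            // connection to the host:16020 address it discovered via ZooKeeper.
            // That is the connection that times out in the error above.
            table.get(new Get(Bytes.toBytes("some-row-key")));
        }
    }
}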
Created 10-25-2017 11:05 PM
Thanks so much for the prompt reply.
So the RegionServer is running on port 16020, and the following is the output of netstat:
$ netstat -an | egrep 16020
tcp6 0 0 2.53.143.145:16020 :::* LISTEN
tcp6 0 0 2.53.143.145:35036 2.53.143.145:16020 ESTABLISHED
tcp6 0 0 2.53.143.145:16020 2.53.143.145:35036 ESTABLISHED