Support Questions


Can't Connect to HBase through NiFi HBaseClientService because Zookeeper is Unable to Create the Connection

Rising Star

Description of my setup:

I have HDF sandbox node-A running NiFi while HDP sandbox node-B runs HBase. Both sandboxes have ZooKeeper running. In NiFi, I set up an HBaseClientService for the PutHBaseJSON processor and added hbase-site.xml to the HBaseClientService under the Hadoop Configuration Files property (see attached screenshots hbase-clientservice-error.png and hbase-config-hdp-setup.png). I originally took hbase-site.xml from the HDP sandbox at "/var/lib/ambari-server/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-site.xml". I then modified hbase.rootdir and hbase.zookeeper.quorum and added hbase.regionserver.port so NiFi knows where to find HBase.
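For reference, a minimal client-side hbase-site.xml for this setup might contain entries like the following. The hostname and port are illustrative and must match your HDP sandbox; /hbase-unsecure is the unsecured-sandbox znode parent that also appears in the logs later in this thread:

```xml
<!-- Example hbase-site.xml entries for the NiFi HBaseClientService.
     Values are placeholders; substitute the ones from your cluster. -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>sandbox-hdp.hortonworks.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase-unsecure</value>
  </property>
</configuration>
```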

I have narrowed the problem down to a configuration issue on either the NiFi or HBase side, not a networking issue, because I have successfully verified that NiFi can write data into HDFS using the PutHDFS processor.

The Problem:

A snippet of the stack trace from nifi-app.log shows an hbase.MasterNotRunningException caused by a failure to get a connection to ZooKeeper, so I conclude the problem is coming from ZooKeeper. I don't understand why this is happening, because I included both NiFi's and HBase's addresses in the hbase.zookeeper.quorum property value in hbase-site.xml.

2017-03-11 16:18:22,066 ERROR [StandardProcessScheduler Thread-1] o.a.n.c.s.StandardControllerServiceNode HBase_1_1_2_ClientService[id=4d4142fa-0159-1000-a996-9604038fb2af] Failed to invoke @OnEnabled method due to org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=1, exceptions:
Sat Mar 11 16:18:22 UTC 2017, RpcRetryingCaller{globalStartTime=1489249087052, pause=100, retries=1}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to ZooKeeper: KeeperErrorCode = OperationTimeout

Update March 11, 2017:

I found other interesting insights in the HDF and HDP sandbox logs.

HDF sandbox logs:

NiFi logs:

The NiFi logs indicate there is a connection problem with ZooKeeper, so I checked the ZooKeeper logs.

o.a.h.h.zookeeper.RecoverableZooKeeper Unable to create ZooKeeper Connection
java.net.UnknownHostException: sandbox.hortonworks.com: Name or service not known
o.a.h.hbase.zookeeper.ZooKeeperWatcher hconnection-0x30c93240x0, quorum=sandbox-hdp.hortonworks.com:2181,sandbox.hortonworks.com:2181, baseZNode=/hbase-unsecure Received unexpected KeeperException, re-throwing exception
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to ZooKeeper: KeeperErrorCode = OperationTimeout
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to ZooKeeper: KeeperErrorCode = OperationTimeout
Caused by: org.apache.zookeeper.KeeperException$OperationTimeoutException: KeeperErrorCode = OperationTimeout
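The java.net.UnknownHostException above suggests the HDF sandbox cannot resolve the hostname sandbox.hortonworks.com at all, separately from any ZooKeeper problem. Between sandbox containers this is commonly handled with /etc/hosts entries on each side. A sketch, with placeholder IPs rather than a verified mapping:

```text
# /etc/hosts on the sandbox that fails to resolve the name
# (substitute the real container IPs for the placeholders)
<IP-of-HDP-sandbox>   sandbox-hdp.hortonworks.com
<IP-of-host-named-sandbox>   sandbox.hortonworks.com
```

After adding entries, `getent hosts <hostname>` on each sandbox should return the expected IP before retrying the HBaseClientService.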

Zookeeper logs:

The ZooKeeper logs show there is no node for the NiFi Primary Node and Cluster Coordinator leader paths. I also see that the client closed its socket, so ZooKeeper can't read any more data from the client.

2017-03-11 19:55:49,257 - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x15abb279ca20026 type:delete cxid:0x8 zxid:0x22f5 txntype:-1 reqpath:n/a Error Path:/nifi/leaders/Primary Node/_c_f44978a2-7e01-487d-878b-d99d79fee7a9-lock-0000000005 Error:KeeperErrorCode = NoNode for /nifi/leaders/Primary Node/_c_f44978a2-7e01-487d-878b-d99d79fee7a9-lock-0000000005
2017-03-11 19:55:49,257 - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x15abb279ca20026 type:delete cxid:0x9 zxid:0x22f6 txntype:-1 reqpath:n/a Error Path:/nifi/leaders/Cluster Coordinator/_c_17d97f01-625d-459b-bcb9-850d98da42a6-lock-0000000005 Error:KeeperErrorCode = NoNode for /nifi/leaders/Cluster Coordinator/_c_17d97f01-625d-459b-bcb9-850d98da42a6-lock-0000000005
EndOfStreamException: Unable to read additional data from client sessionid 0x15abb279ca20035, likely client has closed socket
 at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
 at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
 at java.lang.Thread.run(Thread.java:745)

Ambari Infra Solr logs:

The Solr logs indicate that a config specified in ZooKeeper does not exist, and that the Solr Overseer cannot talk to ZooKeeper.

Caused by: org.apache.solr.common.SolrException: Unable to create core [hadoop_logs_shard1_replica1]
Caused by: org.apache.solr.common.cloud.ZooKeeperException: Specified config does not exist in ZooKeeper: hadoop_logs
2017-03-11 21:35:06,404 [OverseerCollectionConfigSetProcessor-97596120747868221-172.17.0.2:8886_solr-n_0000000000] WARN [  ] org.apache.solr.cloud.OverseerTaskProcessor (OverseerTaskProcessor.java:249) - Overseer cannot talk to ZK
2017-03-11 21:35:06,409 [OverseerHdfsCoreFailoverThread-97596120747868221-172.17.0.2:8886_solr-n_0000000000] ERROR [  ] org.apache.solr.common.SolrException (SolrException.java:159) - OverseerAutoReplicaFailoverThread had an error in its thread work loop.:org.apache.solr.common.SolrException: Error reading cluster properties

HDP sandbox logs:

HBase logs:

The HBase logs show that connections to Solr were refused.

WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://sandbox.hortonworks.com:6188/ws/v1/timeline/metrics
ERROR [org.apache.ranger.audit.queue.AuditBatchQueue0] impl.CloudSolrClient: Request to collection ranger_audits failed due to (0) java.net.ConnectException: Connection refused (Connection refused), retry? 0
Caused by: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://172.17.0.2:8886/solr/ranger_audits_shard1_replica1]
Caused by: org.apache.solr.client.solrj.SolrServerException: Server refused connection at: http://172.17.0.2:8886/solr/ranger_audits_shard1_replica1
Caused by: java.net.ConnectException: Connection refused (Connection refused)

Zookeeper logs:

The ZooKeeper logs show an "Invalid configuration, only one server specified" error, which looks related to the ZooKeeper quorum. My question, then: when you have multiple ZooKeeper services running in your cluster, don't you have to specify their IP addresses or hostnames in hbase-site.xml on both sandboxes?

2017-03-11 20:07:57,177 - INFO [main:QuorumPeerConfig@103] - Reading configuration from: /usr/hdp/current/zookeeper-server/conf/zoo.cfg
2017-03-11 20:07:57,178 - ERROR [main:QuorumPeerConfig@287] - Invalid configuration, only one server specified (ignoring)
2017-03-11 21:01:16,208 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x15abefc1ba0000d, likely client has closed socket
 at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
 at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
 at java.lang.Thread.run(Thread.java:745)
2017-03-11 21:07:27,068 - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x15abefc1ba00011 type:create cxid:0x1 zxid:0xdbe txntype:-1 reqpath:n/a Error Path:/consumers Error:KeeperErrorCode = NodeExists for /consumers
2017-03-11 21:07:27,068 - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x15abefc1ba00011 type:create cxid:0x2 zxid:0xdbf txntype:-1 reqpath:n/a Error Path:/consumers/ranger_entities_consumer Error:KeeperErrorCode = NodeExists for /consumers/ranger_entities_consumer
2017-03-11 21:07:27,068 - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x15abefc1ba00011 type:create cxid:0x3 zxid:0xdc0 txntype:-1 reqpath:n/a Error Path:/consumers/ranger_entities_consumer/ids Error:KeeperErrorCode = NodeExists for /consumers/ranger_entities_consumer/ids
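For reference, the "only one server specified" message comes from zoo.cfg, not from hbase-site.xml: a multi-node ensemble is declared with server.N lines in zoo.cfg on every ensemble member, while a standalone sandbox ZooKeeper has none (or only one), which triggers that warning. A sketch, with illustrative hostnames:

```text
# zoo.cfg for a hypothetical three-node ensemble.
# A single sandbox normally runs ZooKeeper standalone without server.N lines.
tickTime=2000
dataDir=/hadoop/zookeeper
clientPort=2181
initLimit=10
syncLimit=5
server.1=sandbox-hdp.hortonworks.com:2888:3888
server.2=sandbox-hdf.hortonworks.com:2888:3888
server.3=zk-third-node.example.com:2888:3888
```

Note that two independent sandboxes each running their own standalone ZooKeeper do not form a quorum just because both hostnames are listed in a client's hbase.zookeeper.quorum.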

Ambari Infra Solr logs:

The Solr logs indicate that Solr could not read data. I believe this log message refers to Solr not being able to read the data coming in from the HDF sandbox.

[OverseerExitThread] ERROR [  ] org.apache.solr.cloud.Overseer$ClusterStateUpdater (Overseer.java:311) - could not read the data
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /overseer_elect/leader
2017-03-11 21:35:06,630 [OverseerHdfsCoreFailoverThread-97600331946393615-172.17.0.3:8886_solr-n_0000000001] ERROR [  ] org.apache.solr.common.SolrException (SolrException.java:159) - OverseerAutoReplicaFailoverThread had an error in its thread work loop.:org.apache.solr.common.SolrException: Error reading cluster properties
Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /clusterprops.json

Any help for resolving this issue would be greatly appreciated.

Thank you.

Regards,

James

hbase-site.xml

nifi-app.txt

zookeeper-zookeeper-server-sandboxhortonworkscom.txt

6 REPLIES

Super Guru

Did you verify that node-A can connect to the ZooKeeper instance on node-B?

$ echo ruok | nc node-B 2181

You should see the response "imok". The error appears to imply that a network connection cannot be opened to ZooKeeper which suggests a firewall or port-forwarding issue.
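A minimal sketch wrapping the ruok probe above in a reusable function, so the check can be scripted from each sandbox (the hostname is an assumption; nc's -w flag sets a connect timeout):

```shell
# Probe a ZooKeeper server with the "ruok" four-letter word.
# Returns success only if the server answers "imok".
zk_ok() {
  local resp
  resp=$(echo ruok | nc -w 2 "$1" "${2:-2181}" 2>/dev/null)
  [ "$resp" = "imok" ]
}

# Example (hypothetical host; substitute your ZooKeeper node):
zk_ok sandbox-hdp.hortonworks.com 2181 && echo "ZooKeeper healthy" || echo "ZooKeeper unreachable"
```

Running this from node-A against node-B distinguishes "ZooKeeper answering" from merely "port open", which telnet alone does not show.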

Rising Star

Hello @Josh Elser,

Thank you for your prompt response. I ran:

echo ruok | nc sandbox-hdp.hortonworks.com 2181

I received the message "imok".

I believe it is not a firewall issue because I tried

telnet <hostname-hdp> 2181 

against HDP sandbox from HDF sandbox.

I also tried

telnet <hostname-hdf> 2181

against HDF sandbox from HDP sandbox.

The console output shows that HDF can connect to the ZooKeeper port on HDP:

[root@sandbox ~]# telnet 172.17.0.3 2181
Trying 172.17.0.3...
Connected to 172.17.0.3.
Escape character is '^]'.

Note: 172.17.0.3 is the IP address of the HDP sandbox.

What approach would you take to debug a port forwarding issue?

Thanks,

James

Super Guru

If the above worked, it's not a port-forwarding issue.

Rising Star

Hi @Josh Elser

I am sending my GPS device data over MQTT to Kafka, and from Kafka to HBase, but I am getting stuck while putting data into HBase. Could you suggest a flow that would help me?

MQTT --> Kafka --> HBase or HTTP --> Kafka --> HBase

Also, how do I define the PutHBaseJSON processor properties?

I am running NiFi on localhost and putting data into HBase on a cloud Ambari cluster.
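For anyone landing here with the same PutHBaseJSON question, a sketch of the key processor settings (property names as documented for NiFi 1.x; the values below are placeholders, not a verified configuration):

```text
# PutHBaseJSON processor properties (illustrative values)
HBase Client Service       -> an enabled HBase_1_1_2_ClientService controller service
Table Name                 -> <your-table>           # must already exist in HBase
Row Identifier             -> ${uuid}                # or set Row Identifier Field Name
                                                     # to take the row key from a JSON field
Column Family              -> <your-column-family>
Batch Size                 -> 25
```

Each flattened JSON field becomes a column qualifier in the given column family, so the incoming FlowFile content should be a single flat JSON object.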

any help it would be great.

Thanks.

Contributor

@jmedel how did you fix this issue? I am facing the same issue now...

Explorer

@jmedel Hello, I have the same issue. Have you solved it? Thanks. Regards,