
hbase connection refused

Explorer

Dear all,

I installed CDH5 with all services on 3 servers, one as the namenode and the other two as datanodes.
Recently I wrote a MapReduce program that transfers data from HDFS to an HBase table. When I ran the program on the HBase master node, it worked fine:

.
.
/parcels/CDH/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//hadoop-streaming-2.5.0-cdh5.3.2.jar
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/lib/native
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-504.8.1.el6.x86_64
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:user.name=root
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/maintainer/maprtest/busdatahbase
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x8eae88f, quorum=localhost:2181, baseZNode=/hbase
15/03/25 14:06:02 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/25 14:06:02 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/03/25 14:06:02 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x14be4ad837908bd, negotiated timeout = 60000
15/03/25 14:06:03 INFO mapreduce.HFileOutputFormat2: Looking up current regions for table Busdatatest
15/03/25 14:06:03 INFO mapreduce.HFileOutputFormat2: Configuring 1 reduce partitions to match current region count
15/03/25 14:06:03 INFO mapreduce.HFileOutputFormat2: Writing partition information to /tmp/partitions_2f60547f-f4ef-437b-af98-56e12c3cc121
15/03/25 14:06:03 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
15/03/25 14:06:03 INFO compress.CodecPool: Got brand-new compressor [.deflate]
15/03/25 14:06:03 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
15/03/25 14:06:03 INFO mapreduce.HFileOutputFormat2: Incremental table Busdatatest output configured.
15/03/25 14:06:04 INFO client.RMProxy: Connecting to ResourceManager at trafficdata0.sis.uta.fi/153.1.62.179:8032
15/03/25 14:06:05 INFO input.FileInputFormat: Total input paths to process : 6
15/03/25 14:06:06 INFO mapreduce.JobSubmitter: number of splits:6
15/03/25 14:06:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1426162558224_0095
15/03/25 14:06:06 INFO impl.YarnClientImpl: Submitted application application_1426162558224_0095
15/03/25 14:06:06 INFO mapreduce.Job: The url to track the job: http://trafficdata0.sis.uta.fi:8088/proxy/application_1426162558224_0095/
15/03/25 14:06:06 INFO mapreduce.Job: Running job: job_1426162558224_0095
15/03/25 14:06:19 INFO mapreduce.Job: Job job_1426162558224_0095 running in uber mode : false
15/03/25 14:06:19 INFO mapreduce.Job:  map 0% reduce 0%
15/03/25 14:06:30 INFO mapreduce.Job:  map 5% reduce 0%
15/03/25 14:06:31 INFO mapreduce.Job:  map 13% reduce 0%
15/03/25 14:06:32 INFO mapreduce.Job:  map 21% reduce 0%
^C15/03/25 14:06:33 INFO mapreduce.Job:  map 23% reduce 0%
.
.


I checked the result and it is correct.
However, when I tried to run the same program on a regionserver node, it cannot connect, with the following log:

 

.
.
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-504.el6.x86_64
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:user.name=hdfs
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:user.home=/var/lib/hadoop-hdfs
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/maintainer/myjars
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x474fc0cb, quorum=localhost:2181, baseZNode=/hbase
15/03/25 10:16:54 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/25 10:16:54 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/25 10:16:54 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
15/03/25 10:16:54 INFO util.RetryCounter: Sleeping 1000ms before retry #0...
15/03/25 10:16:55 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/25 10:16:55 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/25 10:16:55 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
15/03/25 10:16:55 INFO util.RetryCounter: Sleeping 2000ms before retry #1...
15/03/25 10:16:56 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/25 10:16:56 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
.
.
.

 

I also checked the client configuration under '/etc/hbase/conf/hbase-site.xml'; it does set the quorum to connect to the master node:

 

.
.
  <property>
    <name>zookeeper.znode.rootserver</name>
    <value>root-region-server</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>trafficdata0.sis.uta.fi</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
.
.


But when the program runs it always connects to localhost, so it seems to be a ZooKeeper configuration problem. Do you know why this happens? Please help.
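
(For reference, here is a minimal sketch, not my actual driver code, of how to print which quorum the client configuration really resolves to; if hbase-site.xml is not on the job's classpath, HBaseConfiguration falls back to the localhost default that shows up in the log above.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuorumCheck {
    public static void main(String[] args) {
        // create() loads hbase-default.xml and hbase-site.xml from the classpath;
        // without hbase-site.xml on the classpath the quorum stays at the default, localhost.
        Configuration conf = HBaseConfiguration.create();
        System.out.println("hbase.zookeeper.quorum = " + conf.get("hbase.zookeeper.quorum"));
        System.out.println("clientPort = " + conf.get("hbase.zookeeper.property.clientPort"));
    }
}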

Thanks.

Br,
YIibin

3 REPLIES

New Contributor
Please replace the value of "hbase.zookeeper.quorum" with the IP address; using the hostname causes this issue at times.
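
For example, in hbase-site.xml (a sketch; 153.1.62.179 is the master IP that appears in the job log above, so substitute your own quorum IP):

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>153.1.62.179</value>
  </property>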

New Contributor

This issue can be resolved in two ways: one with a modification in the code, and the other with a modification in the config file.

 

1) Code modification:  Set the attribute value for "hbase.zookeeper.quorum" in the code itself

 

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Point the client explicitly at the ZooKeeper quorum instead of relying
// on whatever hbase-site.xml happens to be on the classpath.
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "192.168.15.20");
config.set("hbase.zookeeper.property.clientPort", "2181");

 

2) Modify the /etc/hosts file as below

 

Remove the standalone localhost entry from that file and add localhost to the line with the HBase server's IP.

For example, suppose your HBase server's /etc/hosts file looks like this:

 

127.0.0.1 localhost

192.166.66.66 xyz.hbase.com hbase

 

You have to change it like this, removing the standalone localhost entry:

 

# 127.0.0.1 localhost # line commented out

192.166.66.66 xyz.hbase.com hbase localhost # note: localhost added here

 

This is because when the remote machine asks the HBase server machine where HMaster is running, the server answers that it is running on localhost, meaning "on this machine". So if the 127.0.0.1 localhost entry stays in place, the HBase server returns that address and the remote machine starts looking for HMaster on its own machine.

New Contributor

I got a KeeperErrorCode = ConnectionLoss for /hbase exception, but the following configuration worked for me.
Change in hbase-env.sh:
export HBASE_MANAGES_ZK=true

<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zknode1,zknode2,zknode3,zknode4</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>

Add all of your ZooKeeper nodes to hbase.zookeeper.quorum.

If the above config does not work for you, then add the following property in Advanced ams-hbase-security-site:

zookeeper.znode.parent= /hbase
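
In hbase-site.xml form, that property would look like this (a sketch):

  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>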