HBase Master not starting in Kerberos secured cluster

For testing purposes I have set up a single-node Kerberos-secured cluster. Now I am trying to start HBase in this cluster. ZooKeeper starts, but the HBase master gives me the following error (the hostname is phoenix.docker.com, as I later want to install Phoenix):

2017-07-11 23:37:06,304 INFO  [master/phoenix.docker.com/172.21.0.3:16000] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
2017-07-11 23:37:06,620 FATAL [phoenix:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/hbase":root:root:drwxr-xr-x

I am wondering why it tries to use the hbase user instead of the root user, as root is the user with which I start HBase via the `start-hbase.sh` script. Anyway, when I manually create the HDFS directory `/hbase` and grant permissions to the hbase user, I get the next error:
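
For reference, the manual workaround looks roughly like this; the hdfs principal and keytab path are just what I use in my Docker setup, yours may differ:

# authenticate as the HDFS superuser first (keytab path/principal are from my setup)
kinit -kt /etc/security/keytabs/hdfs.keytab hdfs/phoenix.docker.com
# create the HBase root directory and hand it over to the hbase user (group "hbase" assumed)
hdfs dfs -mkdir /hbase
hdfs dfs -chown hbase:hbase /hbase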

2017-07-11 23:43:11,512 INFO  [Thread-66] hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:197)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
	at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1998)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1356)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
2017-07-11 23:43:11,527 INFO  [Thread-66] hdfs.DFSClient: Abandoning BP-1176169754-172.21.0.3-1499814944202:blk_1073741837_1013
2017-07-11 23:43:11,557 INFO  [Thread-66] hdfs.DFSClient: Excluding datanode 172.21.0.3:50010
2017-07-11 23:43:11,604 WARN  [Thread-66] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

Checking the datanode logs, it seems there is a problem with the SASL connection:

2017-07-11 23:43:41,885 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /172.21.0.3:49322. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection

Does anyone have an idea how to solve this?

You can reproduce the error with my Docker project: https://github.com/Knappek/docker-phoenix-secure

1 ACCEPTED SOLUTION

For anyone getting the same error: I have finally solved it. I had forgotten to add properties to hdfs-site.xml and core-site.xml, as you can see in this commit: https://github.com/Knappek/docker-hadoop-secure/commit/2214e8723048bc5403a006a57cbb9732d5cec838
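
In case the commit link is not reachable: the datanode message about the missing SASL data transfer protection handshake points at the standard Hadoop SASL settings, so the properties to check are along these lines (the values here are illustrative; the exact ones I used are in the commit):

hdfs-site.xml:

<property>
  <name>dfs.data.transfer.protection</name>
  <value>authentication</value>
</property>

core-site.xml:

<property>
  <name>hadoop.rpc.protection</name>
  <value>authentication</value>
</property>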

2 REPLIES

Any help?
