
HBase Master failed to start?

Super Collaborator

Please see the log below:

2016-01-29 13:00:41,580 FATAL [eu-lamp-dev-xl-0019-hadoop-sec-master:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.ranger.authorization.hadoop.exceptions.RangerAccessControlException): Permission denied: principal{user=hbase,groups: [hadoop]}, access=null, /apps/hbase/data
        at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:327)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
        at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3856)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2081)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2077)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2075)
        at org.apache.hadoop.ipc.Client.call(Client.java:1427)
        at org.apache.hadoop.ipc.Client.call(Client.java:1358)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
        at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:424)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:146)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:126)
        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:649)
        at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
        at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
        at java.lang.Thread.run(Thread.java:745)
2016-01-29 13:00:41,583 FATAL [eu-lamp-dev-xl-0019-hadoop-sec-master:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(org.apache.ranger.authorization.hadoop.exceptions.RangerAccessControlException): Permission denied: principal{user=hbase,groups: [hadoop]}, access=null, /apps/hbase/data
        at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:327)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
        at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3856)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2081)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2077)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2075)
        at org.apache.hadoop.ipc.Client.call(Client.java:1427)
        at org.apache.hadoop.ipc.Client.call(Client.java:1358)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
        at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:424)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:146)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:126)
        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:649)
        at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
        at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
        at java.lang.Thread.run(Thread.java:745)
2016-01-29 13:00:41,584 INFO  [eu-lamp-dev-xl-0019-hadoop-sec-master:16000.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.
2016-01-29 13:00:41,585 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] regionserver.HRegionServer: Stopping infoServer
2016-01-29 13:00:41,610 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:16010
2016-01-29 13:00:41,612 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] regionserver.HRegionServer: stopping server eu-lamp-dev-xl-0019-hadoop-sec-master,16000,1454072438556
2016-01-29 13:00:41,612 DEBUG [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] zookeeper.MetaTableLocator: Stopping MetaTableLocator
2016-01-29 13:00:41,613 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x24fb78762b2015b
2016-01-29 13:00:41,621 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] zookeeper.ZooKeeper: Session: 0x24fb78762b2015b closed
2016-01-29 13:00:41,621 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-01-29 13:00:41,621 DEBUG [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] ipc.AbstractRpcClient: Stopping rpc client
2016-01-29 13:00:41,631 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] regionserver.HRegionServer: stopping server eu-lamp-dev-xl-0019-hadoop-sec-master,16000,1454072438556; all regions closed.
2016-01-29 13:00:41,632 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] hbase.ChoreService: Chore service for: eu-lamp-dev-xl-0019-hadoop-sec-master,16000,1454072438556 had [] on shutdown
2016-01-29 13:00:41,632 DEBUG [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] master.HMaster: Stopping service threads
2016-01-29 13:00:41,646 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] ipc.RpcServer: Stopping server on 16000
2016-01-29 13:00:41,646 INFO  [RpcServer.listener,port=16000] ipc.RpcServer: RpcServer.listener,port=16000: stopping
2016-01-29 13:00:41,649 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2016-01-29 13:00:41,649 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2016-01-29 13:00:41,680 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] zookeeper.RecoverableZooKeeper: Node /hbase-unsecure/rs/eu-lamp-dev-xl-0019-hadoop-sec-master,16000,1454072438556 already deleted, retry=false
2016-01-29 13:00:41,694 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] zookeeper.ZooKeeper: Session: 0x24fb78762b2015a closed
2016-01-29 13:00:41,694 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-01-29 13:00:41,694 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] regionserver.HRegionServer: stopping server eu-lamp-dev-xl-0019-hadoop-sec-master,16000,1454072438556; zookeeper connection closed.
2016-01-29 13:00:41,694 INFO  [master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000] regionserver.HRegionServer: master/eu-lamp-dev-xl-0019-hadoop-sec-master/10.8.7.62:16000 exiting
[hdfs@eu-lamp-dev-xl-0019-hadoop-sec-master hbase]$ hdfs dfs -ls /apps/hbase
Found 2 items
drwxr-xr-x   - hbase hdfs          0 2016-01-11 12:18 /apps/hbase/data
drwx--x--x   - hbase hdfs          0 2015-07-22 16:26 /apps/hbase/staging
[hdfs@eu-lamp-dev-xl-0019-hadoop-sec-master hbase]$ hdfs dfs -ls /apps/hbase/data
Found 10 items
drwxr-xr-x   - hbase hdfs          0 2015-09-10 10:48 /apps/hbase/data/.hbase-snapshot
drwxr-xr-x   - hbase hdfs          0 2016-01-11 12:18 /apps/hbase/data/.tmp
drwxr-xr-x   - hbase hdfs          0 2016-01-11 12:18 /apps/hbase/data/MasterProcWALs
drwxr-xr-x   - hbase hdfs          0 2016-01-12 11:31 /apps/hbase/data/WALs
drwxr-xr-x   - hbase hdfs          0 2015-09-10 10:48 /apps/hbase/data/archive
drwxr-xr-x   - hbase hdfs          0 2015-07-27 14:47 /apps/hbase/data/corrupt
drwxr-xr-x   - hbase hdfs          0 2015-07-22 16:32 /apps/hbase/data/data
-rwxr-xr-x   3 hbase hdfs         42 2015-07-22 16:32 /apps/hbase/data/hbase.id
-rwxr-xr-x   3 hbase hdfs          7 2015-07-22 16:32 /apps/hbase/data/hbase.version
drwxr-xr-x   - hbase hdfs          0 2016-01-28 13:32 /apps/hbase/data/oldWALs
[hdfs@eu-lamp-dev-xl-0019-hadoop-sec-master hbase]$ hdfs dfs -ls /
Found 10 items
drwxrwxrwx   - yarn   hadoop          0 2016-01-04 16:35 /app-logs
drwxrwxrwx   - hdfs   hdfs            0 2015-07-22 16:28 /apps
drwxr-xr-x   - hdfs   hdfs            0 2015-07-22 16:26 /hdp
drwxr-xr-x   - mapred hdfs            0 2015-07-22 16:23 /mapred
drwxrwxrwx   - hdfs   hdfs            0 2015-07-22 16:23 /mr-history
drwxr-xr-x   - hdfs   hdfs            0 2016-01-26 09:23 /sources
drwxr-xr-x   - hdfs   hdfs            0 2015-07-22 16:22 /system
drwxr-xr-x   - hdfs   hdfs            0 2015-10-27 11:22 /test
drwxrwxrwx   - hdfs   hdfs            0 2016-01-14 10:19 /tmp
drwxr-xr-x   - hdfs   hdfs            0 2016-01-07 10:21 /user
[hdfs@eu-lamp-dev-xl-0019-hadoop-sec-master hbase]$ hdfs dfs -ls /apps
Found 2 items
drwxrwxrwx   - hdfs hdfs          0 2015-07-22 16:26 /apps/hbase
drwxr-xr-x   - hdfs hdfs          0 2015-07-22 16:28 /apps/hive
[hdfs@eu-lamp-dev-xl-0019-hadoop-sec-master hbase]$ hdfs dfs -ls /apps/hbase
Found 2 items
drwxr-xr-x   - hbase hdfs          0 2016-01-11 12:18 /apps/hbase/data
drwx--x--x   - hbase hdfs          0 2015-07-22 16:26 /apps/hbase/staging
[hdfs@eu-lamp-dev-xl-0019-hadoop-sec-master hbase]$
1 ACCEPTED SOLUTION

Super Collaborator

I added the property below to hdfs-site.xml to enable ACL checks in the HDFS filesystem, then restarted the HDFS and YARN services in Ambari, which resolved the issue.

<property>
  <name>dfs.namenode.acls.enabled</name>
  <value>true</value>
</property>
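
For reference, a quick way to verify the change took effect after the restart (a minimal sketch; the path below is taken from the log output in this thread):

# confirm the NameNode picked up the setting
hdfs getconf -confKey dfs.namenode.acls.enabled

# inspect the ACL entries on the HBase root directory
hdfs dfs -getfacl /apps/hbase/data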


10 REPLIES

Master Mentor
@subhash parise

See this

org.apache.hadoop.ipc.RemoteException(org.apache.ranger.authorization.hadoop.exceptions.RangerAccessControlException): Permission denied: principal{user=hbase,groups: [hadoop]}, access=null, /apps/hbase/data

I am guessing that you enabled the Ranger plugin for HBase.

The hbase user does not have permission to access /apps/hbase/data.
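
One quick way to confirm the denial comes from the HDFS/Ranger layer rather than from HBase itself is to repeat the same check as the hbase user (a minimal sketch, run on a host with the HDFS client configured):

# mimic the getFileInfo call the master makes at startup
sudo -u hbase hdfs dfs -ls /apps/hbase/data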

Super Collaborator

Hi Neeraj,

The hbase user does have permission to access /apps/hbase/data.

Please see the output below:

[hdfs@eu-lamp-dev-xl-0019-hadoop-sec-master hbase]$ hdfs dfs -ls /apps/hbase

Found 2 items

drwxr-xr-x - hbase hdfs 0 2016-01-11 12:18 /apps/hbase/data

Master Mentor

@subhash parise My bad, I did not see the full details. In the future, you can click the code button after pasting the error log for better formatting.

Can you check the HDFS and HBase policies in the Ranger console?
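
If your Ranger version exposes the public v2 REST API (an assumption; the host, port, and credentials below are placeholders), the policies can also be listed from the command line instead of the console:

# list all policies known to Ranger Admin; look for the HDFS and HBase repositories
curl -s -u admin:<admin-password> "http://<ranger-host>:6080/service/public/v2/api/policy"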

Super Collaborator

@Neeraj Sabharwal

Please find the Ranger HBase policy attached (ranger.png). I didn't find any HDFS-related policy in the Ranger console.

Master Mentor

@subhash parise Could you enable the HDFS plugin and then try to bring up HBase, please?

Master Mentor

@subhash parise As per my understanding, if you enable the HDFS plugin for Ranger, then you don't need to set that parameter. It would be nice to test this.

Master Mentor
@subhash parise

Do you have a group called hadoop? Can you make sure the hbase user belongs to the hadoop group? Confirm it belongs to the hdfs group as well.
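
A few commands that may help verify the group mapping (a minimal sketch; the group names come from the error message above, and the usermod step assumes local OS users are used for group resolution):

# OS-level group membership of the hbase user
id hbase

# group membership as resolved by HDFS, which is what the authorizer sees
hdfs groups hbase

# if missing, add hbase to the hadoop group (run as root on the relevant hosts)
usermod -a -G hadoop hbase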

Super Collaborator

I added the property below to hdfs-site.xml to enable ACL checks in the HDFS filesystem, then restarted the HDFS and YARN services in Ambari, which resolved the issue.

<property>
  <name>dfs.namenode.acls.enabled</name>
  <value>true</value>
</property>

Master Mentor
@subhash parise

Thanks for the final post. I have accepted this as the best answer.