
HBase region server going down citing lease exception


My HBase region server keeps going down and I need help. Below is the error log:

2016-03-23 23:14:32,250 ERROR [RS_CLOSE_REGION-fsdata1c:60020-1] regionserver.HRegion: Memstore size is 1286992
2016-03-23 23:14:32,250 INFO [RS_CLOSE_REGION-fsdata1c:60020-1] regionserver.HRegion: Closed CUTOFF4,C31\x0916,1458649721550.fab6ecb6588e89c84cff626593274c25.
2016-03-23 23:14:32,250 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-1] handler.CloseRegionHandler: Closed CUTOFF4,C31\x0916,1458649721550.fab6ecb6588e89c84cff626593274c25.
2016-03-23 23:14:32,250 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-1] handler.CloseRegionHandler: Processing close of MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.
2016-03-23 23:14:32,250 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-1] regionserver.HRegion: Closing MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.: disabling compactions & flushes
2016-03-23 23:14:32,250 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-1] regionserver.HRegion: Updates disabled for region MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.
2016-03-23 23:14:32,252 INFO [StoreCloserThread-MONO,O11\x09156779\x093\x09152446845,1449055882623.b51afac320641a8fde6a8f545d70e084.-1] regionserver.HStore: Closed 1
2016-03-23 23:14:32,252 INFO [StoreCloserThread-MONE,O31\x09145411\x092\x091526,1452771105934.f5836191f2d1a9806269864db4287786.-1] regionserver.HStore: Closed 1
2016-03-23 23:14:32,252 INFO [RS_CLOSE_REGION-fsdata1c:60020-0] regionserver.HRegion: Closed MONO,O11\x09156779\x093\x09152446845,1449055882623.b51afac320641a8fde6a8f545d70e084.
2016-03-23 23:14:32,252 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-0] handler.CloseRegionHandler: Closed MONO,O11\x09156779\x093\x09152446845,1449055882623.b51afac320641a8fde6a8f545d70e084.
2016-03-23 23:14:32,252 INFO [RS_CLOSE_REGION-fsdata1c:60020-2] regionserver.HRegion: Closed MONE,O31\x09145411\x092\x091526,1452771105934.f5836191f2d1a9806269864db4287786.
2016-03-23 23:14:32,253 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-2] handler.CloseRegionHandler: Closed MONE,O31\x09145411\x092\x091526,1452771105934.f5836191f2d1a9806269864db4287786.
2016-03-23 23:14:32,254 INFO [StoreCloserThread-MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.-1] regionserver.HStore: Closed 1
2016-03-23 23:14:32,255 INFO [RS_CLOSE_REGION-fsdata1c:60020-1] regionserver.HRegion: Closed MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.
2016-03-23 23:14:32,255 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-1] handler.CloseRegionHandler: Closed MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.
2016-03-23 23:14:32,444 INFO [regionserver60020] regionserver.HRegionServer: stopping server fsdata1c.corp.arc.com,60020,1452067957740; all regions closed.
2016-03-23 23:14:32,444 DEBUG [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier interrupted while waiting for notification from AsyncSyncer thread
2016-03-23 23:14:32,444 INFO [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier exiting
2016-03-23 23:14:32,444 DEBUG [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
2016-03-23 23:14:32,444 INFO [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 exiting
2016-03-23 23:14:32,444 DEBUG [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread
2016-03-23 23:14:32,444 INFO [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 exiting
2016-03-23 23:14:32,444 DEBUG [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread
2016-03-23 23:14:32,445 INFO [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 exiting
2016-03-23 23:14:32,445 DEBUG [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread
2016-03-23 23:14:32,445 INFO [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 exiting
2016-03-23 23:14:32,445 DEBUG [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread
2016-03-23 23:14:32,445 INFO [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 exiting
2016-03-23 23:14:32,445 DEBUG [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer
2016-03-23 23:14:32,445 INFO [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter exiting
2016-03-23 23:14:32,445 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/WALs/fsdata1c.corp.arc.com,60020,1452067957740
2016-03-23 23:14:32,454 ERROR [regionserver60020] regionserver.HRegionServer: Close and delete failed
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot complete file /apps/hbase/data/WALs/fsdata1c.corp.arc.com,60020,1452067957740/fsdata1c.corp.arc.com%2C60020%2C1452067957740.1458771271979. Name node is in safe mode. Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1201)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2994)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:647)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:484)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy18.complete(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy18.complete(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:404)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:272)
    at com.sun.proxy.$Proxy19.complete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2116)
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2100)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:103)
    at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:119)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.close(FSHLog.java:941)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.closeWAL(HRegionServer.java:1185)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:998)
    at java.lang.Thread.run(Thread.java:744)
2016-03-23 23:14:32,555 INFO [regionserver60020] regionserver.Leases: regionserver60020 closing leases
2016-03-23 23:14:32,555 INFO [regionserver60020] regionserver.Leases: regionserver60020 closed leases
2016-03-23 23:14:32,863 WARN [LeaseRenewer:hbase@fsmaster1c.corp.arc.com:8020] hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_hb_rs_fsdata1c.corp.arc.com,60020,1452067957740_752162766_33] for 1018 seconds. Will retry shortly ...
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot renew lease for DFSClient_hb_rs_fsdata1c.corp.arc.com,60020,1452067957740_752162766_33. Name node is in safe mode. Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1201)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4132)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:767)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:588)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy18.renewLease(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy18.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:532)
    at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:272)
    at com.sun.proxy.$Proxy19.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:791)
    at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
    at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
    at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
    at java.lang.Thread.run(Thread.java:744)
2016-03-23 23:14:33,865 WARN [LeaseRenewer:hbase@fsmaster1c.corp.arc.com:8020] hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_hb_rs_fsdata1c.corp.arc.com,60020,1452067957740_752162766_33] for 1019 seconds. Will retry shortly ...
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot renew lease for DFSClient_hb_rs_fsdata1c.corp.arc.com,60020,1452067957740_752162766_33. Name node is in safe mode. Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1201)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4132)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:767)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:588)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy18.renewLease(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy18.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:532)
    at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:272)
    at com.sun.proxy.$Proxy19.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:791)
    at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
    at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
    at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
    at java.lang.Thread.run(Thread.java:744)
2016-03-23 23:14:34,055 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer$PeriodicMemstoreFlusher: regionserver60020.periodicFlusher exiting
2016-03-23 23:14:34,055 INFO [regionserver60020] regionserver.CompactSplitThread: Waiting for Split Thread to finish...
2016-03-23 23:14:34,055 INFO [regionserver60020] regionserver.CompactSplitThread: Waiting for Merge Thread to finish...
2016-03-23 23:14:34,055 INFO [regionserver60020] regionserver.CompactSplitThread: Waiting for Large Compaction Thread to finish...
2016-03-23 23:14:34,055 INFO [regionserver60020] regionserver.CompactSplitThread: Waiting for Small Compaction Thread to finish...
2016-03-23 23:14:34,060 INFO [regionserver60020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x151389443dd01d0
2016-03-23 23:14:34,063 INFO [regionserver60020] zookeeper.ZooKeeper: Session: 0x151389443dd01d0 closed
2016-03-23 23:14:34,063 INFO [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-03-23 23:14:34,067 INFO [regionserver60020] zookeeper.ZooKeeper: Session: 0x251389443e8021c closed
2016-03-23 23:14:34,067 INFO [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-03-23 23:14:34,068 INFO [regionserver60020] regionserver.HRegionServer: stopping server fsdata1c.corp.arc.com,60020,1452067957740; zookeeper connection closed.
2016-03-23 23:14:34,068 INFO [regionserver60020] regionserver.HRegionServer: regionserver60020 exiting
2016-03-23 23:14:34,068 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2403)
2016-03-23 23:14:34,071 INFO [Thread-11] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@64b9f908
2016-03-23 23:14:34,071 INFO [Thread-11] regionserver.HRegionServer: STOPPED: Shutdown hook
2016-03-23 23:14:34,071 INFO [Thread-11] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2016-03-23 23:14:34,071 INFO [Thread-11] regionserver.ShutdownHook: Shutdown hook finished.
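For anyone triaging a similar crash: the region-close and WAL-shutdown chatter above is secondary; the decisive entries are the SafeModeException ones. A quick way to surface them from a region server log (the log path and file name below are assumptions for a typical install; adjust for yours):

# Pull the root-cause lines out of the region server log
grep -E "SafeModeException|ERROR" /var/log/hbase/hbase-hbase-regionserver-fsdata1c.log | head -20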

1 ACCEPTED SOLUTION


@Raja Ray, I recommend checking if your NameNode host is running out of disk space. Here is the main thing I noticed in that log:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot renew lease for DFSClient_hb_rs_fsdata1c.corp.arc.com,60020,1452067957740_752162766_33. Name node is in safe mode. Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.

The NameNode periodically checks that free disk space remains on every volume it uses for writing edit logs (by default it requires at least 100 MB per volume, controlled by dfs.namenode.resource.du.reserved). If any volume falls below that threshold, the NameNode enters safe mode automatically as a precaution.
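As a minimal sketch of how you might verify this on the NameNode host (the edit-log directory shown is an assumption; the real path is whatever dfs.namenode.name.dir points to in your hdfs-site.xml):

# Check free space on the volume(s) holding the NameNode edit logs
# (/hadoop/hdfs/namenode is an assumed path; see dfs.namenode.name.dir)
df -h /hadoop/hdfs/namenode

# Confirm the NameNode is currently in safe mode
hdfs dfsadmin -safemode get

# Only after freeing or adding disk space, leave safe mode manually;
# as the log message warns, the NN re-enters safe mode if resources stay low
hdfs dfsadmin -safemode leave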




Hi @Chris Nauroth,

Thanks for the solution, it worked. I increased the disk space, turned off HDFS safe mode, and started the region server.

Thanks,

Raja Ray
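
For anyone following the same recovery path, the sequence looks roughly like this. Service names and start scripts vary by distribution; the sketch below assumes a plain Apache HBase layout (on an Ambari- or Cloudera-managed cluster, restart the region server from the management UI instead):

# 1. Free or add disk space on the NameNode host first.
# 2. Then leave safe mode; the NN returns to safe mode if resources are still low.
hdfs dfsadmin -safemode leave

# 3. Restart the failed region server on the affected host.
$HBASE_HOME/bin/hbase-daemon.sh start regionserver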