Member since: 03-06-2017
Posts: 28
Kudos Received: 7
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1343 | 10-11-2016 03:23 PM |
02-19-2018
05:06 AM
In this case the source host (the AppMaster) never received any response from the destination; because of a faulty routing issue the packets were simply lost.
I checked the source code of the class "org.apache.hadoop.ipc.Client" and found that it sends a ping to check for a response from the destination host and keeps retrying until it receives one. This is clearly stated at this link: http://grepcode.com/file/repo1.maven.org/maven2/com.ning/metrics.action/0.2.0/org/apache/hadoop/ipc/Client.java
"This class sends a ping to the remote side when timeout on reading. If no failure is detected, it retries until at least a byte is read."
So, because of the routing issue, the client kept retrying until the job was killed. Thanks to grepcode.com for making it easy to read the source code.
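For readers who want to see the idea in code, here is a minimal sketch of the quoted behaviour (this is an illustration, not the actual Hadoop implementation; the class and field names are made up): on a read timeout, send a ping and retry the read, so the loop only ends when a byte arrives or the ping itself fails.

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.SocketTimeoutException;

// Illustrative sketch of "ping on read timeout, retry until a byte is read".
// Not the real org.apache.hadoop.ipc.Client code; names here are hypothetical.
class PingingInputStream extends FilterInputStream {
    private final OutputStream out;  // channel used to send the ping
    private final byte[] pingFrame;  // whatever the protocol uses as a ping marker

    PingingInputStream(InputStream in, OutputStream out, byte[] pingFrame) {
        super(in);
        this.out = out;
        this.pingFrame = pingFrame;
    }

    @Override
    public int read() throws IOException {
        while (true) {
            try {
                return super.read();      // normal case: a byte arrived
            } catch (SocketTimeoutException e) {
                out.write(pingFrame);     // read timed out: ping the remote side
                out.flush();              // if the ping fails, this throws and the loop ends
                // otherwise loop and try the read again, indefinitely
            }
        }
    }
}
```

This is why, with a silently black-holed route, the client never sees a hard failure and simply keeps waiting.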
02-11-2018
03:49 PM
Dear friends, I need some help finding the root cause of this issue. During a Sqoop job failure, we noticed that the application master wasn't able to connect to a NodeManager due to connection timeouts, and it kept retrying the connection for close to 2 hours until it was killed manually. The timeout was due to a temporary network issue between the AppMaster and a NodeManager. Here is an overview of what happened:
RM <-----> NM01 (hdpn01)   network OK
RM <-----> NM08 (hdpn08)   network OK
NM01 <---X---> NM08        network failed
The AppMaster container was launched on the NM01 node. Here is a brief log:
INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2018-02-03 21:12:51,734 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1517675224254_1052: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:2776576, vCores:1> knownNMs=24
2018-02-03 21:12:52,751 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2018-02-03 21:12:52,793 INFO [RMCommunicator Allocator] org.apache.hadoop.yarn.util.RackResolver: Resolved hdpn08.ztpl.net to /default-rack
2018-02-03 21:12:52,797 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1517675224254_1052_02_000002 to attempt_1517675224254_1052_m_000000_1000
2018-02-03 21:12:52,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 Sc
2018-02-03 21:24:04,379 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1517675224254_1052_m_000000_1000 TaskAttempt Transitioned from ASSIGNED to KILL_CONTAINER_CLEANUP
2018-02-03 21:24:04,380 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1517675224254_1052_m_000000_1000: Container expired since it was unused
2018-02-03 21:24:04,381 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1517675224254_1052_02_000002 taskAttempt attempt_1517675224254_1052_m_000000_1000
2018-02-03 21:24:53,616 WARN [ContainerLauncher #0] org.apache.hadoop.ipc.Client: Failed to connect to server: hdpn08.ztpl.net/172.20.1.108:45454: retries get failed due to exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
at org.apache.hadoop.ipc.Clien ......................
2018-02-03 22:43:58,911 WARN [ContainerLauncher #0] org.apache.hadoop.ipc.Client: Failed to connect to server: hdpn08.ztpl.net/172.20.1.108:45454: retries get failed due to exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
Why did the AM keep retrying the connection to the NM on hdpn08 for 2 hours, until it was manually killed? If it hadn't been killed, it would have continued for much longer. Why didn't the AM stop trying after some number of attempts? Is there a maximum-attempts property for the application master? Why didn't the AM spin up another map task to compensate for this problematic task? Thanks
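Not an answer, but for context: on a Hadoop 2.x / HDP stack there are a handful of client-side properties that normally bound this kind of connect-retry loop (names from core-default.xml / yarn-default.xml; please verify against your version). A hypothetical snippet to print their effective values on a cluster node:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Hypothetical helper: print the retry/timeout settings that usually govern how long
// an IPC client (such as the AM's ContainerLauncher) keeps trying to reach a NodeManager.
public class ShowRetrySettings {
    public static void main(String[] args) {
        Configuration conf = new YarnConfiguration(); // loads core-site.xml and yarn-site.xml from the classpath
        String[] keys = {
            "ipc.client.connect.timeout",                   // per-attempt connect timeout (ms)
            "ipc.client.connect.max.retries",               // retries on plain connect failures
            "ipc.client.connect.max.retries.on.timeouts",   // retries when the connect times out
            "yarn.client.nodemanager-connect.max-wait-ms",  // total time a client waits for a NM
            "yarn.client.nodemanager-connect.retry-interval-ms"
        };
        for (String key : keys) {
            System.out.println(key + " = " + conf.get(key)); // null means "not set here, default applies"
        }
    }
}
```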
Labels:
- Apache Sqoop
- Apache YARN
08-02-2017
05:22 AM
1 Kudo
I found the solution. There was an incorrect hostname entry in the /etc/hosts file on the ResourceManager node; as a result, NodeManager registration failed because the ResourceManager does not accept requests from an unauthorized host. Thanks, Khireswar
08-01-2017
03:30 PM
1 Kudo
It has been observed that the NodeManager starts successfully but dies after 8-10 minutes with the following error:
2017-08-01 08:34:55,349 INFO client.ConfiguredRMFailoverProxyProvider (ConfiguredRMFailoverProxyProvider.java:performFailover(100)) - Failing over to rm1
2017-08-01 08:34:55,373 WARN retry.RetryInvocationHandler (RetryInvocationHandler.java:handleException(217)) - Exception while invoking ResourceTrackerPBClientImpl.registerNodeManager over rm1. Not retrying because failovers (30) exceeded maximum allowed (30)
2017-08-01 08:34:55,373 ERROR nodemanager.NodeStatusUpdaterImpl (NodeStatusUpdaterImpl.java:serviceStart(229)) - Unexpected error starting NodeStatusUpdater
2017-08-01 08:34:55,373 INFO service.AbstractService (AbstractService.java:noteFailure(272)) - Service org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.authorize.AuthorizationException: User nm/zaldn8.r1-core.r1.zal.net@DEV.HADOOP.R1-CORE.R1.ZAL.NET (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.yarn.server.api.ResourceTrackerPB: this service is only accessible by nm/172.20.176.119@DEV.HADOOP.R1-CORE.R1.ZAL.NET
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.authorize.AuthorizationException: User nm/zaldn8.r1-core.r1.zal.net@DEV.HADOOP.R1-CORE.R1.ZAL.NET (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.yarn.server.api.ResourceTrackerPB: this service is only accessible by nm/172.20.176.119@DEV.HADOOP.R1-CORE.R1.ZAL.NET
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:230)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:302)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:547)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:594)
Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User nm/zaldn8.r1-core.r1.zal.net@DEV.HADOOP.R1-CORE.R1.ZAL.NET (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.yarn.server.api.ResourceTrackerPB: this service is only accessible by nm/172.20.176.119@DEV.HADOOP.R1-CORE.R1.ZAL.NET
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:70)
at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy84.registerNodeManager(Unknown Source)
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:305)
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:224)
... 6 more
It seems like this is a DNS issue, but the hostname -f command returns the correct hostname. Do you have any suggestions on how to resolve this issue?
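One thing that may be worth checking, since the rejected principal in the error contains an IP address rather than a hostname: what the JVM itself resolves for the NodeManager host, both forward and reverse, because Hadoop typically derives Kerberos service principals from those lookups rather than from hostname -f alone. A hypothetical check (hostname taken from the error above):

```java
import java.net.InetAddress;

// Hypothetical diagnostic: compare forward and reverse resolution as seen by the JVM.
// If the reverse lookup prints the raw IP instead of the FQDN, reverse DNS or an
// /etc/hosts entry is likely misconfigured on the node running this check.
public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "zaldn8.r1-core.r1.zal.net";
        InetAddress addr = InetAddress.getByName(host);                  // forward lookup
        System.out.println("forward: " + host + " -> " + addr.getHostAddress());
        System.out.println("reverse: " + addr.getHostAddress() + " -> "
                + addr.getCanonicalHostName());                          // reverse lookup
    }
}
```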
Labels:
- Apache Hadoop
- Apache YARN
- Kerberos
- Security
02-08-2017
12:24 PM
I am using HDP 2.4. I also noticed that when permission is granted, the database is shown with its tables, but when permission is revoked, the databases are shown with no tables.
02-07-2017
12:58 PM
1 Kudo
I am working with Ranger Hive policies and seeing some weird behaviour. We have granted a group access only to specific databases, but users in that group can see all databases, although they see them with no tables since they don't have access to them.
The configuration in Ranger seems to be fine. Is this expected behaviour in Ranger, or can we restrict users from viewing the databases? Thanks
Labels:
- Apache Hive
- Apache Ranger
10-11-2016
03:23 PM
1 Kudo
I have received an e-credit on my examlocal account. Now I will be able to reschedule my exam.
10-11-2016
08:14 AM
1 Kudo
I appeared for the HDPCA exam on 1st October, but due to a technical issue in the examlocal environment my exam could not be loaded. When I chatted with the support team, they told me that I would get a refund of my exam fees within 2 business days, but I have not received the refund yet. I have sent mail to certification@hortonworks.com and ExamSupport@psionline.com, but no one is responding. Can someone guide me to an alternative way to contact the exam authority?
Labels:
- Certification
10-07-2016
11:15 AM
All services are up after a restart, but this is a recurring issue, so I am posting the log to check whether this is a bug or a configuration issue. The HBase master and region servers go down with the following log.
-------------------------------
2016-10-07 03:01:02,867 INFO [imp1tvhdpmst1.corp.test.com,16000,1475152605958_splitLogManager__ChoreService_1] master.SplitLogManager$TimeoutMonitor: Chore: SplitLogManager Timeout Monitor missed its start time
2016-10-07 03:01:02,898 INFO [main-SendThread(imp1tvhdpmst2.corp.test.com:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 65745ms for sessionid 0x1572d7e3442032c, closing socket connection and attempting reconnect
2016-10-07 03:01:02,898 INFO [imp1tvhdpmst1:16000.activeMasterManager-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x3572d7e3c28020c, likely server has closed socket, closing socket connection and attempting reconnect
2016-10-07 03:01:02,902 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x3572d7e3c28020b, likely server has closed socket, closing socket connection and attempting reconnect
2016-10-07 03:01:02,902 INFO [imp1tvhdpmst1:16000.activeMasterManager-SendThread(imp1tvhdpmst3.corp.test.com:2181)] zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x2572d7e364901d1, likely server has closed socket, closing socket connection and attempting reconnect
2016-10-07 03:01:03,096 INFO [main-SendThread(imp1tvhdpmst4.corp.test.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-10-07 03:01:03,096 INFO [main-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server imp1tvhdpmst4.corp.test.com/172.24.125.133:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-10-07 03:01:03,096 INFO [main-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Socket connection established to imp1tvhdpmst4.corp.test.com/172.24.125.133:2181, initiating session
2016-10-07 03:01:03,100 INFO [main-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x1572d7e3442032c has expired, closing socket connection
2016-10-07 03:01:03,100 FATAL [main-EventThread] master.HMaster: Master server abort: loaded coprocessors are: [org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor]
2016-10-07 03:01:03,101 FATAL [main-EventThread] master.HMaster: master:16000-0x1572d7e3442032c, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure master:16000-0x1572d7e3442032c received expired from ZooKeeper, aborting
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:613)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:524)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:534)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2016-10-07 03:01:03,102 INFO [main-EventThread] regionserver.HRegionServer: STOPPED: master:16000-0x1572d7e3442032c, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure master:16000-0x1572d7e3442032c received expired from ZooKeeper, aborting
2016-10-07 03:01:03,102 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-10-07 03:01:03,102 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] util.Sleeper: We slept 63817ms instead of 3000ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2016-10-07 03:01:03,102 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] regionserver.HRegionServer: Stopping infoServer
2016-10-07 03:01:03,124 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:16010
2016-10-07 03:01:03,174 INFO [imp1tvhdpmst1:16000.activeMasterManager-SendThread(imp1tvhdpmst3.corp.test.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-10-07 03:01:03,174 INFO [imp1tvhdpmst1:16000.activeMasterManager-SendThread(imp1tvhdpmst3.corp.test.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server imp1tvhdpmst3.corp.test.com/172.24.125.132:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-10-07 03:01:03,175 INFO [imp1tvhdpmst1:16000.activeMasterManager-SendThread(imp1tvhdpmst3.corp.test.com:2181)] zookeeper.ClientCnxn: Socket connection established to imp1tvhdpmst3.corp.test.com/172.24.125.132:2181, initiating session
2016-10-07 03:01:03,176 INFO [imp1tvhdpmst1:16000.activeMasterManager-SendThread(imp1tvhdpmst3.corp.test.com:2181)] zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x3572d7e3c28020c has expired, closing socket connection
2016-10-07 03:01:03,176 WARN [imp1tvhdpmst1:16000.activeMasterManager-EventThread] client.ConnectionManager$HConnectionImplementation: This client just lost it's session with ZooKeeper, closing it. It will be recreated next time someone needs it
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:613)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:524)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:534)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2016-10-07 03:01:03,177 INFO [imp1tvhdpmst1:16000.activeMasterManager-EventThread] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3572d7e3c28020c
2016-10-07 03:01:03,177 INFO [imp1tvhdpmst1:16000.activeMasterManager-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-10-07 03:01:03,224 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] procedure2.ProcedureExecutor: Stopping the procedure executor
2016-10-07 03:01:03,225 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] wal.WALProcedureStore: Stopping the WAL Procedure Store
2016-10-07 03:01:03,272 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] regionserver.HRegionServer: stopping server imp1tvhdpmst1.corp.test.com,16000,1475152605958
2016-10-07 03:01:03,272 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3572d7e3c28020b
2016-10-07 03:01:03,452 INFO [imp1tvhdpmst1:16000.activeMasterManager-SendThread(imp1tvhdpmst4.corp.test.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-10-07 03:01:03,452 INFO [imp1tvhdpmst1:16000.activeMasterManager-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server imp1tvhdpmst4.corp.test.com/172.24.125.133:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-10-07 03:01:03,452 INFO [imp1tvhdpmst1:16000.activeMasterManager-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Socket connection established to imp1tvhdpmst4.corp.test.com/172.24.125.133:2181, initiating session
2016-10-07 03:01:03,454 INFO [imp1tvhdpmst1:16000.activeMasterManager-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x2572d7e364901d1 has expired, closing socket connection
2016-10-07 03:01:03,454 INFO [imp1tvhdpmst1:16000.activeMasterManager-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-10-07 03:01:03,499 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.ZooKeeper: Session: 0x3572d7e3c28020b closed
2016-10-07 03:01:03,499 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] regionserver.HRegionServer: stopping server imp1tvhdpmst1.corp.test.com,16000,1475152605958; all regions closed.
2016-10-07 03:01:03,499 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-10-07 03:01:03,500 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] hbase.ChoreService: Chore service for: imp1tvhdpmst1.corp.test.com,16000,1475152605958 had [[ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: imp1tvhdpmst1.corp.test.com,16000,1475152605958-BalancerChore Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: CatalogJanitor-imp1tvhdpmst1:16000 Period: 300000 Unit: MILLISECONDS], [ScheduledChore: Name: imp1tvhdpmst1.corp.test.com,16000,1475152605958-RegionNormalizerChore Period: 1800000 Unit: MILLISECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: imp1tvhdpmst1.corp.test.com,16000,1475152605958-ClusterStatusChore Period: 60000 Unit: MILLISECONDS]] on shutdown
2016-10-07 03:01:03,501 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/master
2016-10-07 03:01:03,635 ERROR [PriorityRpcServer.handler=8,queue=0,port=16000] master.MasterRpcServices: Region server imp1tvhdpslv4.corp.test.com,16020,1475152621893 reported a fatal error:
ABORTING region server imp1tvhdpslv4.corp.test.com,16020,1475152621893: regionserver:16020-0x1572d7e3442032e, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure regionserver:16020-0x1572d7e3442032e received expired from ZooKeeper, aborting
Cause:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:613)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:524)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:534)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2016-10-07 03:01:03,635 ERROR [PriorityRpcServer.handler=10,queue=0,port=16000] master.MasterRpcServices: Region server imp1tvhdpslv1.corp.test.com,16020,1475152617424 reported a fatal error:
ABORTING region server imp1tvhdpslv1.corp.test.com,16020,1475152617424: regionserver:16020-0x3572d7e3c28020d, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure regionserver:16020-0x3572d7e3c28020d received expired from ZooKeeper, aborting
Cause:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:613)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:524)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:534)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2016-10-07 03:01:03,867 INFO [imp1tvhdpmst1.corp.test.com,16000,1475152605958_splitLogManager__ChoreService_1] master.SplitLogManager$TimeoutMonitor: Chore: SplitLogManager Timeout Monitor was stopped
2016-10-07 03:01:04,501 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/master
2016-10-07 03:01:06,502 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/master
2016-10-07 03:01:10,509 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/master
2016-10-07 03:01:18,032 ERROR [PriorityRpcServer.handler=13,queue=1,port=16000] master.MasterRpcServices: Region server imp1tvhdpslv2.corp.test.com,16020,1475152618858 reported a fatal error:
ABORTING region server imp1tvhdpslv2.corp.test.com,16020,1475152618858: Get list of registered region servers
Cause:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/rs
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:295)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchForNewChildren(ZKUtil.java:454)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchThem(ZKUtil.java:482)
at org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl.getRegisteredRegionServers(ReplicationTrackerZKImpl.java:245)
at org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl.refreshOtherRegionServersList(ReplicationTrackerZKImpl.java:226)
at org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl.access$400(ReplicationTrackerZKImpl.java:42)
at org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher.refreshListIfRightPath(ReplicationTrackerZKImpl.java:142)
at org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher.nodeDeleted(ReplicationTrackerZKImpl.java:117)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:539)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:534)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2016-10-07 03:01:18,033 ERROR [PriorityRpcServer.handler=11,queue=1,port=16000] master.MasterRpcServices: Region server imp1tvhdpslv3.corp.test.com,16020,1475152617489 reported a fatal error:
ABORTING region server imp1tvhdpslv3.corp.test.com,16020,1475152617489: Get list of registered region servers
Cause:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/rs
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:295)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchForNewChildren(ZKUtil.java:454)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchThem(ZKUtil.java:482)
at org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl.getRegisteredRegionServers(ReplicationTrackerZKImpl.java:245)
at org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl.refreshOtherRegionServersList(ReplicationTrackerZKImpl.java:226)
at org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl.access$400(ReplicationTrackerZKImpl.java:42)
at org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher.refreshListIfRightPath(ReplicationTrackerZKImpl.java:142)
at org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher.nodeDeleted(ReplicationTrackerZKImpl.java:117)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:539)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:534)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2016-10-07 03:01:18,336 ERROR [PriorityRpcServer.handler=7,queue=1,port=16000] master.MasterRpcServices: Region server imp1tvhdpslv2.corp.test.com,16020,1475152618858 reported a fatal error:
ABORTING region server imp1tvhdpslv2.corp.test.com,16020,1475152618858: regionserver:16020-0x2572d7e364901d4, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure regionserver:16020-0x2572d7e364901d4 received expired from ZooKeeper, aborting
Cause:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:613)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:524)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:534)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2016-10-07 03:01:18,372 ERROR [PriorityRpcServer.handler=6,queue=0,port=16000] master.MasterRpcServices: Region server imp1tvhdpslv3.corp.test.com,16020,1475152617489 reported a fatal error:
ABORTING region server imp1tvhdpslv3.corp.test.com,16020,1475152617489: regionserver:16020-0x1572d7e3442032d, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure regionserver:16020-0x1572d7e3442032d received expired from ZooKeeper, aborting
Cause:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:613)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:524)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:534)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2016-10-07 03:01:18,509 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/master
2016-10-07 03:01:18,510 ERROR [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
2016-10-07 03:01:18,510 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.ZKUtil: master:16000-0x1572d7e3442032c, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure Unable to get data of znode /hbase-secure/master
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:621)
at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:148)
at org.apache.hadoop.hbase.master.ActiveMasterManager.stop(ActiveMasterManager.java:267)
at org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1175)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1071)
at java.lang.Thread.run(Thread.java:745)
2016-10-07 03:01:18,510 ERROR [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.ZooKeeperWatcher: master:16000-0x1572d7e3442032c, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:621)
at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:148)
at org.apache.hadoop.hbase.master.ActiveMasterManager.stop(ActiveMasterManager.java:267)
at org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1175)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1071)
at java.lang.Thread.run(Thread.java:745)
2016-10-07 03:01:18,510 ERROR [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] master.ActiveMasterManager: master:16000-0x1572d7e3442032c, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure Error deleting our own master address node
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:621)
at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:148)
at org.apache.hadoop.hbase.master.ActiveMasterManager.stop(ActiveMasterManager.java:267)
at org.apache.hadoop.hbase.master.HMaster.stopServiceThreads(HMaster.java:1175)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1071)
at java.lang.Thread.run(Thread.java:745)
2016-10-07 03:01:18,512 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] hbase.ChoreService: Chore service for: imp1tvhdpmst1.corp.test.com,16000,1475152605958_splitLogManager_ had [] on shutdown
2016-10-07 03:01:18,512 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] flush.MasterFlushTableProcedureManager: stop: server shutting down.
2016-10-07 03:01:18,512 INFO [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] ipc.RpcServer: Stopping server on 16000
2016-10-07 03:01:18,512 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/tokenauth/keymaster
2016-10-07 03:01:19,512 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/tokenauth/keymaster
2016-10-07 03:01:21,512 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-secure/tokenauth/keymaster
2016-10-07 03:01:23,132 WARN [LeaseRenewer:hbase@imp1tvhdpmst3.corp.test.com:8020] hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_-1572577966_107] for 30 seconds. Will retry shortly ...
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1932)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4534)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:1089)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:660)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)
at org.apache.hadoop.ipc.Client.call(Client.java:1426)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy19.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:592)
at sun.reflect.GeneratedMethodAccessor124.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy20.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:892)
at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
at java.lang.Thread.run(Thread.java:745)
2016-10-07 03:01:24,135 WARN [LeaseRenewer:hbase@imp1tvhdpmst3.corp.test.com:8020] hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_-1572577966_107] for 31 seconds. Will retry shortly ...
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1932)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4534)
Regionserver Log:
2016-10-07 03:01:02,894 INFO [main-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 71395ms for sessionid 0x3572d7e3c28020d, closing socket connection and attempting reconnect
2016-10-07 03:01:02,899 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020-SendThread(imp1tvhdpmst3.corp.test.com:2181)] zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x2572d7e364901d2, likely server has closed socket, closing socket connection and attempting reconnect
2016-10-07 03:01:03,603 INFO [main-SendThread(imp1tvhdpmst2.corp.test.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-10-07 03:01:03,603 INFO [main-SendThread(imp1tvhdpmst2.corp.test.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server imp1tvhdpmst2.corp.test.com/172.24.125.131:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-10-07 03:01:03,604 INFO [main-SendThread(imp1tvhdpmst2.corp.test.com:2181)] zookeeper.ClientCnxn: Socket connection established to imp1tvhdpmst2.corp.test.com/172.24.125.131:2181, initiating session
2016-10-07 03:01:03,608 INFO [main-SendThread(imp1tvhdpmst2.corp.test.com:2181)] zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x3572d7e3c28020d has expired, closing socket connection
2016-10-07 03:01:03,608 FATAL [main-EventThread] regionserver.HRegionServer: ABORTING region server imp1tvhdpslv1.corp.test.com,16020,1475152617424: regionserver:16020-0x3572d7e3c28020d, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure regionserver:16020-0x3572d7e3c28020d received expired from ZooKeeper, aborting
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:613)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:524)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:534)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2016-10-07 03:01:03,608 FATAL [main-EventThread] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint, org.apache.hadoop.hbase.security.token.TokenProvider, org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint, org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor]
2016-10-07 03:01:03,623 INFO [main-EventThread] regionserver.HRegionServer: Dump of metrics as JSON on abort: {
"beans" : [ {
"name" : "java.lang:type=Memory",
"modelerType" : "sun.management.MemoryImpl",
"NonHeapMemoryUsage" : {
"committed" : 142868480,
"init" : 136773632,
"max" : 184549376,
"used" : 63971032
},
"ObjectPendingFinalizationCount" : 0,
"Verbose" : true,
"HeapMemoryUsage" : {
"committed" : 8536260608,
"init" : 8589934592,
"max" : 8536260608,
"used" : 795318008
},
"ObjectName" : "java.lang:type=Memory"
} ],
"beans" : [ {
"name" : "Hadoop:service=HBase,name=RegionServer,sub=IPC",
"modelerType" : "RegionServer,sub=IPC",
"tag.Context" : "regionserver",
"tag.Hostname" : "imp1tvhdpslv1.corp.test.com",
"queueSize" : 0,
"numCallsInGeneralQueue" : 0,
"numCallsInReplicationQueue" : 0,
"numCallsInPriorityQueue" : 0,
"numOpenConnections" : 0,
"numActiveHandler" : 0,
"TotalCallTime_num_ops" : 6766,
"TotalCallTime_min" : 0,
"TotalCallTime_max" : 4398,
"TotalCallTime_mean" : 22.693319538870824,
"TotalCallTime_median" : 0.0,
"TotalCallTime_75th_percentile" : 1.0,
"TotalCallTime_95th_percentile" : 85.0,
"TotalCallTime_99th_percentile" : 251.58999999999992,
"exceptions.FailedSanityCheckException" : 0,
"exceptions.RegionMovedException" : 0,
"QueueCallTime_num_ops" : 6766,
"QueueCallTime_min" : 0,
"QueueCallTime_max" : 25,
"QueueCallTime_mean" : 0.11409991132131245,
"QueueCallTime_median" : 0.0,
"QueueCallTime_75th_percentile" : 0.0,
"QueueCallTime_95th_percentile" : 1.0,
"QueueCallTime_99th_percentile" : 1.0,
"authenticationFailures" : 0,
"authorizationFailures" : 0,
"exceptions" : 6,
"authenticationSuccesses" : 74,
"authorizationSuccesses" : 74,
"ProcessCallTime_num_ops" : 6766,
"ProcessCallTime_min" : 0,
"ProcessCallTime_max" : 4398,
"ProcessCallTime_mean" : 22.579219627549513,
"ProcessCallTime_median" : 0.0,
"ProcessCallTime_75th_percentile" : 1.0,
"ProcessCallTime_95th_percentile" : 85.0,
"ProcessCallTime_99th_percentile" : 251.58999999999992,
"exceptions.NotServingRegionException" : 6,
"sentBytes" : 82090551,
"exceptions.RegionTooBusyException" : 0,
"receivedBytes" : 205125313,
"exceptions.OutOfOrderScannerNextException" : 0,
"exceptions.UnknownScannerException" : 0
} ],
"beans" : [ {
"name" : "Hadoop:service=HBase,name=RegionServer,sub=Replication",
"modelerType" : "RegionServer,sub=Replication",
"tag.Context" : "regionserver",
"tag.Hostname" : "imp1tvhdpslv1.corp.test.com",
"sink.appliedOps" : 0,
"sink.appliedBatches" : 0,
"sink.ageOfLastAppliedOp" : 0
} ],
"beans" : [ {
"name" : "Hadoop:service=HBase,name=RegionServer,sub=Server",
"modelerType" : "RegionServer,sub=Server",
"tag.zookeeperQuorum" : "imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181",
"tag.serverName" : "imp1tvhdpslv1.corp.test.com,16020,1475152617424",
"tag.clusterId" : "af4af9f4-724e-42a0-9b08-c3ec0b260656",
"tag.Context" : "regionserver",
"tag.Hostname" : "imp1tvhdpslv1.corp.test.com",
"regionCount" : 12,
"storeCount" : 12,
"hlogFileCount" : 2,
"hlogFileSize" : 0,
"storeFileCount" : 11,
"memStoreSize" : 5088,
"storeFileSize" : 22766460359,
"regionServerStartTime" : 1475152617424,
"totalRequestCount" : 3028580,
"readRequestCount" : 6564,
"writeRequestCount" : 252,
"checkMutateFailedCount" : 0,
"checkMutatePassedCount" : 0,
"storeFileIndexSize" : 17472,
"staticIndexSize" : 19363382,
"staticBloomSize" : 271565340,
"mutationsWithoutWALCount" : 0,
"mutationsWithoutWALSize" : 0,
"percentFilesLocal" : 66,
"percentFilesLocalSecondaryRegions" : 0,
"splitQueueLength" : 0,
"compactionQueueLength" : 0,
"flushQueueLength" : 0,
"blockCacheFreeSize" : 3410842632,
"blockCacheCount" : 3,
"blockCacheSize" : 3661560,
"blockCacheHitCount" : 3171629,
"blockCacheHitCountPrimary" : 3171629,
"blockCacheMissCount" : 51343,
"blockCacheMissCountPrimary" : 51343,
"blockCacheEvictionCount" : 4600,
"blockCacheEvictionCountPrimary" : 4600,
"blockCacheCountHitPercent" : 98.0,
"blockCacheExpressHitPercent" : 99,
"updatesBlockedTime" : 0,
"flushedCellsCount" : 252,
"compactedCellsCount" : 0,
"majorCompactedCellsCount" : 0,
"flushedCellsSize" : 57768,
"compactedCellsSize" : 0,
"majorCompactedCellsSize" : 0,
"blockedRequestCount" : 0,
"splitSuccessCount" : 1,
"splitRequestCount" : 1,
"Append_num_ops" : 0,
"Append_min" : 0,
"Append_max" : 0,
"Append_mean" : 0.0,
"Append_median" : 0.0,
"Append_75th_percentile" : 0.0,
"Append_95th_percentile" : 0.0,
"Append_99th_percentile" : 0.0,
"Delete_num_ops" : 0,
"Delete_min" : 0,
"Delete_max" : 0,
"Delete_mean" : 0.0,
"Delete_median" : 0.0,
"Delete_75th_percentile" : 0.0,
"Delete_95th_percentile" : 0.0,
"Delete_99th_percentile" : 0.0,
"Mutate_num_ops" : 583,
"Mutate_min" : 1,
"Mutate_max" : 412,
"Mutate_mean" : 61.06003430531732,
"Mutate_median" : 79.0,
"Mutate_75th_percentile" : 84.0,
"Mutate_95th_percentile" : 110.0,
"Mutate_99th_percentile" : 126.6,
"ScanNext_num_ops" : 21,
"ScanNext_min" : 0,
"ScanNext_max" : 31287,
"ScanNext_mean" : 16041.0,
"ScanNext_median" : 9034.0,
"ScanNext_75th_percentile" : 12580.0,
"ScanNext_95th_percentile" : 12680.0,
"ScanNext_99th_percentile" : 12680.0,
"slowDeleteCount" : 0,
"slowIncrementCount" : 0,
"FlushTime_num_ops" : 3,
"FlushTime_min" : 1239,
"FlushTime_max" : 21322,
"FlushTime_mean" : 13931.333333333334,
"FlushTime_median" : 19233.0,
"FlushTime_75th_percentile" : 21322.0,
"FlushTime_95th_percentile" : 21322.0,
"FlushTime_99th_percentile" : 21322.0,
"Get_num_ops" : 5372,
"Get_min" : 0,
"Get_max" : 47,
"Get_mean" : 0.22412509307520476,
"Get_median" : 0.0,
"Get_75th_percentile" : 0.0,
"Get_95th_percentile" : 1.0,
"Get_99th_percentile" : 1.0,
"Replay_num_ops" : 0,
"Replay_min" : 0,
"Replay_max" : 0,
"Replay_mean" : 0.0,
"Replay_median" : 0.0,
"Replay_75th_percentile" : 0.0,
"Replay_95th_percentile" : 0.0,
"Replay_99th_percentile" : 0.0,
"slowGetCount" : 0,
"slowAppendCount" : 0,
"slowPutCount" : 0,
"SplitTime_num_ops" : 1,
"SplitTime_min" : 4334,
"SplitTime_max" : 4334,
"SplitTime_mean" : 4334.0,
"SplitTime_median" : 4334.0,
"SplitTime_75th_percentile" : 4334.0,
"SplitTime_95th_percentile" : 4334.0,
"SplitTime_99th_percentile" : 4334.0,
"Increment_num_ops" : 0,
"Increment_min" : 0,
"Increment_max" : 0,
"Increment_mean" : 0.0,
"Increment_median" : 0.0,
"Increment_75th_percentile" : 0.0,
"Increment_95th_percentile" : 0.0,
"Increment_99th_percentile" : 0.0
} ]
}
2016-10-07 03:01:03,638 INFO [main-EventThread] regionserver.HRegionServer: STOPPED: regionserver:16020-0x3572d7e3c28020d, quorum=imp1tvhdpmst2.corp.test.com:2181,imp1tvhdpmst3.corp.test.com:2181,imp1tvhdpmst4.corp.test.com:2181, baseZNode=/hbase-secure regionserver:16020-0x3572d7e3c28020d received expired from ZooKeeper, aborting
2016-10-07 03:01:03,638 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-10-07 03:01:03,638 WARN [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] util.Sleeper: We slept 62101ms instead of 3000ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2016-10-07 03:01:03,639 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0] provider.BaseAuditHandler: Audit Status Log: name=hbaseRegional.async.summary.multi_dest.batch, finalDestination=hbaseRegional.async.summary.multi_dest.batch.hdfs, interval=01:03.866 minutes, events=1, totalEvents=585, totalSuccessCount=250
2016-10-07 03:01:03,639 INFO [org.apache.ranger.audit.queue.AuditBatchQueue1] provider.BaseAuditHandler: Audit Status Log: name=hbaseRegional.async.summary.multi_dest.batch, finalDestination=hbaseRegional.async.summary.multi_dest.batch.solr, interval=01:03.860 minutes, events=1, totalEvents=552, totalSuccessCount=250
2016-10-07 03:01:03,639 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2016-10-07 03:01:03,640 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] regionserver.HRegionServer: Stopping infoServer
2016-10-07 03:01:03,640 INFO [SplitLogWorker-imp1tvhdpslv1:16020] regionserver.SplitLogWorker: SplitLogWorker interrupted. Exiting.
2016-10-07 03:01:03,640 INFO [org.apache.ranger.audit.queue.AuditBatchQueue1] provider.BaseAuditHandler: Audit Status Log: name=hbaseRegional.async.summary.multi_dest.batch.solr, interval=08:47:58.911 hours, events=2, succcessCount=2, totalEvents=250, totalSuccessCount=250
2016-10-07 03:01:03,640 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0] provider.BaseAuditHandler: Audit Status Log: name=hbaseRegional.async.summary.multi_dest.batch.hdfs, interval=08:47:58.893 hours, events=2, succcessCount=2, totalEvents=250, totalSuccessCount=250
2016-10-07 03:01:03,640 INFO [SplitLogWorker-imp1tvhdpslv1:16020] regionserver.SplitLogWorker: SplitLogWorker imp1tvhdpslv1.corp.test.com,16020,1475152617424 exiting
2016-10-07 03:01:03,642 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:16030
2016-10-07 03:01:03,743 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] regionserver.HeapMemoryManager: Stoping HeapMemoryTuner chore.
2016-10-07 03:01:03,744 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager abruptly.
2016-10-07 03:01:03,744 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting
2016-10-07 03:01:03,744 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher: MemStoreFlusher.0 exiting
2016-10-07 03:01:03,744 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] flush.RegionServerFlushTableProcedureManager: Stopping region server flush procedure manager abruptly.
2016-10-07 03:01:03,749 INFO [StoreCloserThread-ambarismoketest,,1466187475382.7ee7f2de55e70e37f2fb23624f9e593e.-1] regionserver.HStore: Closed family
2016-10-07 03:01:03,750 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] regionserver.HRegionServer: aborting server imp1tvhdpslv1.corp.test.com,16020,1475152617424
2016-10-07 03:01:03,750 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-2] regionserver.HRegion: Closed ambarismoketest,,1466187475382.7ee7f2de55e70e37f2fb23624f9e593e.
2016-10-07 03:01:03,750 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x2572d7e364901d2
2016-10-07 03:01:03,750 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020-SendThread(imp1tvhdpmst4.corp.test.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-10-07 03:01:03,752 INFO [StoreCloserThread-du_secure_vault_tst_jcb,,1475653102504.d6e8b646cabc9a73fee39da3f71afcc9.-1] regionserver.HStore: Closed vault
2016-10-07 03:01:03,752 INFO [StoreCloserThread-du_secure_vault_onedaydata_dq,contract_main_resource_num_token_99445B2,1473230358405.57a88265f314ba4a9ff713deef241c37.-1] regionserver.HStore: Closed vault
2016-10-07 03:01:03,753 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server imp1tvhdpmst4.corp.test.com/172.24.125.133:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-10-07 03:01:03,753 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-1] regionserver.HRegion: Closed du_secure_vault_tst_jcb,,1475653102504.d6e8b646cabc9a73fee39da3f71afcc9.
2016-10-07 03:01:03,753 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020-SendThread(imp1tvhdpmst4.corp.test.com:2181)] zookeeper.ClientCnxn: Socket connection established to imp1tvhdpmst4.corp.test.com/172.24.125.133:2181, initiating session
2016-10-07 03:01:03,755 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-0] regionserver.HRegion: Closed du_secure_vault_onedaydata_dq,contract_main_resource_num_token_99445B2,1473230358405.57a88265f314ba4a9ff713deef241c37.
2016-10-07 03:01:03,755 INFO [StoreCloserThread-sun_test,,1464509020983.12c9c4b8684478c63917f0253bf930b8.-1] regionserver.HStore: Closed cf
2016-10-07 03:01:03,755 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-2] regionserver.HRegion: Closed sun_test,,1464509020983.12c9c4b8684478c63917f0253bf930b8.
2016-10-07 03:01:03,757 INFO [StoreCloserThread-du_secure_vault_tst_may,,1475763178137.0d211090290d0cac9aceb17d29c2e05a.-1] regionserver.HStore: Closed vault
2016-10-07 03:01:03,757 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-0] regionserver.HRegion: Closed du_secure_vault_tst_may,,1475763178137.0d211090290d0cac9aceb17d29c2e05a.
2016-10-07 03:01:03,758 INFO [StoreCloserThread-du_hbase_krb_tst,,1465128507241.11f2c36247ad6f3348ca4013eb99daad.-1] regionserver.HStore: Closed vault
2016-10-07 03:01:03,758 INFO [StoreCloserThread-du_secure_vault_onedaydata_dq,msisdn_raw_key_97152680792,1472729475904.00d916acbfa9044e067b9b6b9ec0e601.-1] regionserver.HStore: Closed vault
2016-10-07 03:01:03,759 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-1] regionserver.HRegion: Closed du_hbase_krb_tst,,1465128507241.11f2c36247ad6f3348ca4013eb99daad.
2016-10-07 03:01:03,759 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-2] regionserver.HRegion: Closed du_secure_vault_onedaydata_dq,msisdn_raw_key_97152680792,1472729475904.00d916acbfa9044e067b9b6b9ec0e601.
2016-10-07 03:01:03,762 INFO [StoreCloserThread-du_secure_vault_onedaydata,callingnormalized_key_97155790835,1470233185954.293f370188569fa959fe704c6eca64e8.-1] regionserver.HStore: Closed vault
2016-10-07 03:01:03,762 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-0] regionserver.HRegion: Closed du_secure_vault_onedaydata,callingnormalized_key_97155790835,1470233185954.293f370188569fa959fe704c6eca64e8.
2016-10-07 03:01:03,763 INFO [StoreCloserThread-du_secure_vault_onedaydata,contract_storage_medium_num_token_849867732,1470235487700.89859097282ea3fb2868bc90ded9653a.-1] regionserver.HStore: Closed vault
2016-10-07 03:01:03,764 INFO [StoreCloserThread-du_secure_vault_onedaydata,imsi_key_42403021011613,1468488066776.0c2171e0aaba28588914a58f5f70152e.-1] regionserver.HStore: Closed vault
2016-10-07 03:01:03,764 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-1] regionserver.HRegion: Closed du_secure_vault_onedaydata,contract_storage_medium_num_token_849867732,1470235487700.89859097282ea3fb2868bc90ded9653a.
2016-10-07 03:01:03,765 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-2] regionserver.HRegion: Closed du_secure_vault_onedaydata,imsi_key_42403021011613,1468488066776.0c2171e0aaba28588914a58f5f70152e.
2016-10-07 03:01:03,767 INFO [StoreCloserThread-du_secure_vault_onedaydata_dq,cust_id_token_5568245463,1474068738847.8110480ef3807a8e15f7508a979f6fe8.-1] regionserver.HStore: Closed vault
2016-10-07 03:01:03,767 INFO [StoreCloserThread-du_secure_vault,,1465883027720.d506c1c78844ea16909341ca618e18e3.-1] regionserver.HStore: Closed vault
2016-10-07 03:01:03,767 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-0] regionserver.HRegion: Closed du_secure_vault_onedaydata_dq,cust_id_token_5568245463,1474068738847.8110480ef3807a8e15f7508a979f6fe8.
2016-10-07 03:01:03,768 INFO [RS_CLOSE_REGION-imp1tvhdpslv1:16020-1] regionserver.HRegion: Closed du_secure_vault,,1465883027720.d506c1c78844ea16909341ca618e18e3.
2016-10-07 03:01:03,857 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] zookeeper.ZooKeeper: Session: 0x2572d7e364901d2 closed
2016-10-07 03:01:03,857 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-10-07 03:01:03,857 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] regionserver.HRegionServer: stopping server imp1tvhdpslv1.corp.test.com,16020,1475152617424; all regions closed.
2016-10-07 03:01:03,895 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] regionserver.Leases: regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020 closing leases
2016-10-07 03:01:03,895 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] regionserver.Leases: regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020 closed leases
2016-10-07 03:01:03,896 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] hbase.ChoreService: Chore service for: imp1tvhdpslv1.corp.test.com,16020,1475152617424 had [[ScheduledChore: Name: MovedRegionsCleaner for region imp1tvhdpslv1.corp.test.com,16020,1475152617424 Period: 120000 Unit: MILLISECONDS], [ScheduledChore: Name: imp1tvhdpslv1.corp.test.com,16020,1475152617424-MemstoreFlusherChore Period: 10000 Unit: MILLISECONDS]] on shutdown
2016-10-07 03:01:08,678 INFO [RS_OPEN_META-imp1tvhdpslv1:16020-0-MetaLogRoller] regionserver.LogRoller: LogRoller exiting.
2016-10-07 03:01:08,679 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020.logRoller] regionserver.LogRoller: LogRoller exiting.
2016-10-07 03:01:08,680 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] regionserver.CompactSplitThread: Waiting for Split Thread to finish...
2016-10-07 03:01:08,680 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] regionserver.CompactSplitThread: Waiting for Merge Thread to finish...
2016-10-07 03:01:08,680 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] regionserver.CompactSplitThread: Waiting for Large Compaction Thread to finish...
2016-10-07 03:01:08,680 INFO [regionserver/imp1tvhdpslv1.corp.test.com/172.24.125.134:16020] region
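The "We slept 63817ms instead of 3000ms" / "We slept 62101ms instead of 3000ms" warnings in the master and region server logs above suggest GC pauses long enough to outlive the ZooKeeper session, which is what makes the processes abort. As a quick sanity check (the real fix would be GC tuning and/or raising the timeout in hbase-site.xml, to be verified for your HDP version), here is a hypothetical snippet to confirm which session timeout the HBase processes actually pick up:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Hypothetical check: print the ZooKeeper session timeout HBase is configured with.
// A GC pause longer than this value expires the session and aborts the process.
public class ShowZkSessionTimeout {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create(); // loads hbase-site.xml from the classpath
        System.out.println("zookeeper.session.timeout = " + conf.get("zookeeper.session.timeout"));
    }
}
```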
Labels:
- Apache HBase