Member since: 01-06-2016
Posts: 54
Kudos Received: 15
Solutions: 4
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 8130 | 06-24-2016 06:18 AM
 | 826 | 03-18-2016 12:40 PM
 | 6459 | 03-18-2016 06:28 AM
 | 4632 | 03-08-2016 10:02 AM
06-23-2016
07:37 AM
Hi, the DataNode keeps going down for the following reason. Can you please tell me the root cause and resolution?
2016-05-31 06:38:45,807 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:run(295)) - Deleted BP-838165258-10.1.1.94-1459246457024 blk_1073790458_49647 file /var/log/hadoop/hdfs/data/current/BP-838165258-10.1.1.94-1459246457024/current/finalized/subdir0/subdir189/blk_1073790458
2016-05-31 06:38:45,808 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:run(295)) - Deleted BP-838165258-10.1.1.94-1459246457024 blk_1073790460_49649 file /var/log/hadoop/hdfs/data/current/BP-838165258-10.1.1.94-1459246457024/current/finalized/subdir0/subdir189/blk_1073790460
2016-05-31 06:38:50,917 INFO datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-838165258-10.1.1.94-1459246457024:blk_1073790961_50150 src: /10.1.1.30:56265 dest: /10.1.1.29:50010
2016-05-31 06:38:50,987 INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1432)) - src: /10.1.1.30:56265, dest: /10.1.1.29:50010, bytes: 4688706, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-905108031_1, offset: 0, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, blockid: BP-838165258-10.1.1.94-1459246457024:blk_1073790961_50150, duration: 61792605
2016-05-31 06:38:50,988 INFO datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-838165258-10.1.1.94-1459246457024:blk_1073790961_50150, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-05-31 06:39:17,899 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 0, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,900 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 1, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,901 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 2, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,902 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 3, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,903 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 4, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,904 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 5, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,904 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 6, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,905 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 7, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,905 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 8, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,907 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 9, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,908 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 10, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,908 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 11, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,908 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 12, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,908 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 13, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,909 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 14, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,909 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 15, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,909 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 16, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:23,630 ERROR datanode.DataNode (DataXceiver.java:run(278)) - data1.corp.mirrorplus.com:50010:DataXceiver error processing unknown operation src: /127.0.0.1:43209 dst: /127.0.0.1:50010
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
at java.lang.Thread.run(Thread.java:745)
2016-05-31 06:39:30,882 INFO datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-838165258-10.1.1.94-1459246457024:blk_1073790962_50151 src: /10.1.1.30:56392 dest: /10.1.1.29:50010
2016-05-31 06:39:30,902 INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1432)) - src: /10.1.1.30:56169, dest: /10.1.1.29:50010, bytes: 130970563, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-905108031_1, offset: 0, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, blockid: BP-838165258-10.1.1.94-1459246457024:blk_1073790960_50149, duration: 59970347965
2016-05-31 06:39:30,902 INFO datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-838165258-10.1.1.94-1459246457024:blk_1073790960_50149, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790497_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790499_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790505_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790513_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790515_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790523_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790525_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790527_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,499 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790529_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,499 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790531_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:56,902 INFO datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-838165258-10.1.1.94-1459246457024:blk_1073790963_50152 src: /10.1.1.30:56483 dest: /10.1.1.29:50010
2016-05-31 06:39:56,968 INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1432)) - src: /10.1.1.30:56483, dest: /10.1.1.29:50010, bytes: 4694274, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-905108031_1, offset: 0, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, blockid: BP-838165258-10.1.1.94-1459246457024:blk_1073790963_50152, duration: 61182280
2016-05-31 06:39:56,968 INFO datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-838165258-10.1.1.94-1459246457024:blk_1073790963_50152, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-05-31 06:39:57,014 INFO datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-838165258-10.1.1.94-1459246457024:blk_1073790964_50153 src: /10.1.1.30:56488 dest: /10.1.1.29:50010
2016-05-31 06:39:57,438 INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1432)) - src: /10.1.1.30:56488, dest: /10.1.1.29:50010, bytes: 31717025, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-905108031_1, offset: 0, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, blockid: BP-838165258-10.1.1.94-1459246457024:blk_1073790964_50153, duration: 420854449
2016-05-31 06:39:57,438 INFO datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-838165258-10.1.1.94-1459246457024:blk_1073790964_50153, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-05-31 06:40:23,622 ERROR datanode.DataNode (DataXceiver.java:run(278)) - data1.corp.mirrorplus.com:50010:DataXceiver error processing unknown operation src: /127.0.0.1:43354 dst: /127.0.0.1:50010
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
at java.lang.Thread.run(Thread.java:745)
2016-05-31 06:40:30,505 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790484_0 on volume /var/log/hadoop/hdfs/data
2016-05-31 06:40:30,505 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790492_0 on volume /var/log/hadoop/hdfs/data
Labels:
- Apache Hadoop
05-26-2016
06:18 PM
Below is the server log (HBase master log):
2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t33] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=af3ed22bb54a2052eaca660332714462, current_state={af3ed22bb54a2052eaca660332714462 state=PENDING_OPEN, ts=1464282656664, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=ff0af29a7f8e111fc4d46c7a30a17459, current_state={ff0af29a7f8e111fc4d46c7a30a17459 state=PENDING_OPEN, ts=1464282656716, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t24] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=87df75ff7669a91c17cd903d5a9f3a3e, current_state={87df75ff7669a91c17cd903d5a9f3a3e state=PENDING_OPEN, ts=1464282656611, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t40] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=047c1d0ed29a28ddb22b3cbbcb787675, current_state={047c1d0ed29a28ddb22b3cbbcb787675 state=PENDING_OPEN, ts=1464282656711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,029 DEBUG [AM.ZK.Worker-pool2-t22] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f07d37734f7d70dd47f1545345b772e9, current_state={f07d37734f7d70dd47f1545345b772e9 state=PENDING_OPEN, ts=1464282656745, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,049 DEBUG [AM.ZK.Worker-pool2-t25] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=fc6e430b85da20d6ede2cd47a8288519, current_state={fc6e430b85da20d6ede2cd47a8288519 state=PENDING_OPEN, ts=1464282656677, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,049 INFO [AM.ZK.Worker-pool2-t25] master.RegionStates: Transitioned {fc6e430b85da20d6ede2cd47a8288519 state=PENDING_OPEN, ts=1464282656677, server=fsdata1c.corp.arc.com,60020,1464282355492} to {fc6e430b85da20d6ede2cd47a8288519 state=OPENING, ts=1464282657049, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 DEBUG [AM.ZK.Worker-pool2-t28] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=047c1d0ed29a28ddb22b3cbbcb787675, current_state={047c1d0ed29a28ddb22b3cbbcb787675 state=PENDING_OPEN, ts=1464282656711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 INFO [AM.ZK.Worker-pool2-t28] master.RegionStates: Transitioned {047c1d0ed29a28ddb22b3cbbcb787675 state=PENDING_OPEN, ts=1464282656711, server=fsdata1c.corp.arc.com,60020,1464282355492} to {047c1d0ed29a28ddb22b3cbbcb787675 state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 DEBUG [AM.ZK.Worker-pool2-t29] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=87df75ff7669a91c17cd903d5a9f3a3e, current_state={87df75ff7669a91c17cd903d5a9f3a3e state=PENDING_OPEN, ts=1464282656611, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 INFO [AM.ZK.Worker-pool2-t29] master.RegionStates: Transitioned {87df75ff7669a91c17cd903d5a9f3a3e state=PENDING_OPEN, ts=1464282656611, server=fsdata1c.corp.arc.com,60020,1464282355492} to {87df75ff7669a91c17cd903d5a9f3a3e state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,162 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Handling RS_ZK_REGION_CLOSED, server=fsdata2c.corp.arc.com,60020,1464282352148, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=PENDING_CLOSE, ts=1464282656595, server=fsdata2c.corp.arc.com,60020,1464282352148}
2016-05-26 17:10:57,162 DEBUG [AM.ZK.Worker-pool2-t30] handler.ClosedRegionHandler: Handling CLOSED event for 44e206d15b62ed4d452545242bd105cd
2016-05-26 17:10:57,162 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=PENDING_CLOSE, ts=1464282656595, server=fsdata2c.corp.arc.com,60020,1464282352148} to {44e206d15b62ed4d452545242bd105cd state=CLOSED, ts=1464282657162, server=fsdata2c.corp.arc.com,60020,1464282352148}
2016-05-26 17:10:57,163 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Found an existing plan for CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd. destination server is fsdata1c.corp.arc.com,60020,1464282355492 accepted as a dest server = true
2016-05-26 17:10:57,163 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Using pre-existing plan for CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd.; plan=hri=CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd., src=fsdata2c.corp.arc.com,60020,1464282352148, dest=fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,163 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=CLOSED, ts=1464282657162, server=fsdata2c.corp.arc.com,60020,1464282352148} to {44e206d15b62ed4d452545242bd105cd state=OFFLINE, ts=1464282657163, server=fsdata2c.corp.arc.com,60020,1464282352148}
2016-05-26 17:10:57,163 DEBUG [AM.ZK.Worker-pool2-t30] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Creating (or updating) unassigned node 44e206d15b62ed4d452545242bd105cd with OFFLINE state
2016-05-26 17:10:57,167 INFO [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Assigning CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd. to fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,167 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=OFFLINE, ts=1464282657163, server=fsdata2c.corp.arc.com,60020,1464282352148} to {44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,176 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=047c1d0ed29a28ddb22b3cbbcb787675, current_state={047c1d0ed29a28ddb22b3cbbcb787675 state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Transitioned {047c1d0ed29a28ddb22b3cbbcb787675 state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492} to {047c1d0ed29a28ddb22b3cbbcb787675 state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t31] handler.OpenedRegionHandler: Handling OPENED of 047c1d0ed29a28ddb22b3cbbcb787675 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t27] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=87df75ff7669a91c17cd903d5a9f3a3e, current_state={87df75ff7669a91c17cd903d5a9f3a3e state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 INFO [AM.ZK.Worker-pool2-t27] master.RegionStates: Transitioned {87df75ff7669a91c17cd903d5a9f3a3e state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492} to {87df75ff7669a91c17cd903d5a9f3a3e state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t27] handler.OpenedRegionHandler: Handling OPENED of 87df75ff7669a91c17cd903d5a9f3a3e from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,404 DEBUG [AM.ZK.Worker-pool2-t34] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=fc6e430b85da20d6ede2cd47a8288519, current_state={fc6e430b85da20d6ede2cd47a8288519 state=OPENING, ts=1464282657049, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,404 INFO [AM.ZK.Worker-pool2-t34] master.RegionStates: Transitioned {fc6e430b85da20d6ede2cd47a8288519 state=OPENING, ts=1464282657049, server=fsdata1c.corp.arc.com,60020,1464282355492} to {fc6e430b85da20d6ede2cd47a8288519 state=OPEN, ts=1464282657404, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,404 DEBUG [AM.ZK.Worker-pool2-t34] handler.OpenedRegionHandler: Handling OPENED of fc6e430b85da20d6ede2cd47a8288519 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,408 DEBUG [AM.ZK.Worker-pool2-t31] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 047c1d0ed29a28ddb22b3cbbcb787675 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,409 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Znode CUTOFF3,C31\x09166,1463559795389.047c1d0ed29a28ddb22b3cbbcb787675. deleted, state: {047c1d0ed29a28ddb22b3cbbcb787675 state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,410 DEBUG [AM.ZK.Worker-pool2-t34] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node fc6e430b85da20d6ede2cd47a8288519 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Onlined 047c1d0ed29a28ddb22b3cbbcb787675 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Offlined 047c1d0ed29a28ddb22b3cbbcb787675 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,409 DEBUG [AM.ZK.Worker-pool2-t27] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 87df75ff7669a91c17cd903d5a9f3a3e in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,411 DEBUG [AM.ZK.Worker-pool2-t27] master.AssignmentManager: Znode MON,O11\x09154548\x093\x09154524183,1456831930257.87df75ff7669a91c17cd903d5a9f3a3e. deleted, state: {87df75ff7669a91c17cd903d5a9f3a3e state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t27] master.RegionStates: Onlined 87df75ff7669a91c17cd903d5a9f3a3e on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t27] master.RegionStates: Offlined 87df75ff7669a91c17cd903d5a9f3a3e from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,411 DEBUG [AM.ZK.Worker-pool2-t34] master.AssignmentManager: Znode MON,O11\x09154769\x093\x09151995813,1443695471549.fc6e430b85da20d6ede2cd47a8288519. deleted, state: {fc6e430b85da20d6ede2cd47a8288519 state=OPEN, ts=1464282657404, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t34] master.RegionStates: Onlined fc6e430b85da20d6ede2cd47a8288519 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t34] master.RegionStates: Offlined fc6e430b85da20d6ede2cd47a8288519 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,415 DEBUG [AM.ZK.Worker-pool2-t23] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=ff0af29a7f8e111fc4d46c7a30a17459, current_state={ff0af29a7f8e111fc4d46c7a30a17459 state=PENDING_OPEN, ts=1464282656716, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,415 INFO [AM.ZK.Worker-pool2-t23] master.RegionStates: Transitioned {ff0af29a7f8e111fc4d46c7a30a17459 state=PENDING_OPEN, ts=1464282656716, server=fsdata1c.corp.arc.com,60020,1464282355492} to {ff0af29a7f8e111fc4d46c7a30a17459 state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,415 DEBUG [AM.ZK.Worker-pool2-t36] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=016ed857886cbcc5eb088b069484218e, current_state={016ed857886cbcc5eb088b069484218e state=PENDING_OPEN, ts=1464282656633, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,415 INFO [AM.ZK.Worker-pool2-t36] master.RegionStates: Transitioned {016ed857886cbcc5eb088b069484218e state=PENDING_OPEN, ts=1464282656633, server=fsdata1c.corp.arc.com,60020,1464282355492} to {016ed857886cbcc5eb088b069484218e state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,416 DEBUG [AM.ZK.Worker-pool2-t33] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=af3ed22bb54a2052eaca660332714462, current_state={af3ed22bb54a2052eaca660332714462 state=PENDING_OPEN, ts=1464282656664, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,416 INFO [AM.ZK.Worker-pool2-t33] master.RegionStates: Transitioned {af3ed22bb54a2052eaca660332714462 state=PENDING_OPEN, ts=1464282656664, server=fsdata1c.corp.arc.com,60020,1464282355492} to {af3ed22bb54a2052eaca660332714462 state=OPENING, ts=1464282657416, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,508 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=016ed857886cbcc5eb088b069484218e, current_state={016ed857886cbcc5eb088b069484218e state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,508 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Transitioned {016ed857886cbcc5eb088b069484218e state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492} to {016ed857886cbcc5eb088b069484218e state=OPEN, ts=1464282657508, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,508 DEBUG [AM.ZK.Worker-pool2-t21] handler.OpenedRegionHandler: Handling OPENED of 016ed857886cbcc5eb088b069484218e from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,512 DEBUG [AM.ZK.Worker-pool2-t21] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 016ed857886cbcc5eb088b069484218e in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,512 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Znode TSE,,1464258676776.016ed857886cbcc5eb088b069484218e. deleted, state: {016ed857886cbcc5eb088b069484218e state=OPEN, ts=1464282657508, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,512 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Onlined 016ed857886cbcc5eb088b069484218e on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,512 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Offlined 016ed857886cbcc5eb088b069484218e from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,518 DEBUG [AM.ZK.Worker-pool2-t37] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f5836191f2d1a9806269864db4287786, current_state={f5836191f2d1a9806269864db4287786 state=PENDING_OPEN, ts=1464282656656, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 INFO [AM.ZK.Worker-pool2-t37] master.RegionStates: Transitioned {f5836191f2d1a9806269864db4287786 state=PENDING_OPEN, ts=1464282656656, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f5836191f2d1a9806269864db4287786 state=OPENING, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 DEBUG [AM.ZK.Worker-pool2-t22] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=af3ed22bb54a2052eaca660332714462, current_state={af3ed22bb54a2052eaca660332714462 state=OPENING, ts=1464282657416, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 INFO [AM.ZK.Worker-pool2-t22] master.RegionStates: Transitioned {af3ed22bb54a2052eaca660332714462 state=OPENING, ts=1464282657416, server=fsdata1c.corp.arc.com,60020,1464282355492} to {af3ed22bb54a2052eaca660332714462 state=OPEN, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 DEBUG [AM.ZK.Worker-pool2-t22] handler.OpenedRegionHandler: Handling OPENED of af3ed22bb54a2052eaca660332714462 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,522 DEBUG [AM.ZK.Worker-pool2-t22] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node af3ed22bb54a2052eaca660332714462 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,522 DEBUG [AM.ZK.Worker-pool2-t22] master.AssignmentManager: Znode CUTOFF2,C31\x09164,1462813183940.af3ed22bb54a2052eaca660332714462. deleted, state: {af3ed22bb54a2052eaca660332714462 state=OPEN, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,522 INFO [AM.ZK.Worker-pool2-t22] master.RegionStates: Onlined af3ed22bb54a2052eaca660332714462 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,522 INFO [AM.ZK.Worker-pool2-t22] master.RegionStates: Offlined af3ed22bb54a2052eaca660332714462 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,526 DEBUG [AM.ZK.Worker-pool2-t29] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=a438c3fccb4dffce6c3f2fb2a217ff18, current_state={a438c3fccb4dffce6c3f2fb2a217ff18 state=PENDING_OPEN, ts=1464282656691, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,526 INFO [AM.ZK.Worker-pool2-t29] master.RegionStates: Transitioned {a438c3fccb4dffce6c3f2fb2a217ff18 state=PENDING_OPEN, ts=1464282656691, server=fsdata1c.corp.arc.com,60020,1464282355492} to {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPENING, ts=1464282657526, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,535 DEBUG [AM.ZK.Worker-pool2-t39] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=ff0af29a7f8e111fc4d46c7a30a17459, current_state={ff0af29a7f8e111fc4d46c7a30a17459 state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,535 INFO [AM.ZK.Worker-pool2-t39] master.RegionStates: Transitioned {ff0af29a7f8e111fc4d46c7a30a17459 state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492} to {ff0af29a7f8e111fc4d46c7a30a17459 state=OPEN, ts=1464282657535, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,536 DEBUG [AM.ZK.Worker-pool2-t39] handler.OpenedRegionHandler: Handling OPENED of ff0af29a7f8e111fc4d46c7a30a17459 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,539 DEBUG [AM.ZK.Worker-pool2-t39] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node ff0af29a7f8e111fc4d46c7a30a17459 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,540 DEBUG [AM.ZK.Worker-pool2-t39] master.AssignmentManager: Znode MONE,O31\x09156336\x093\x09152045625,1463618251297.ff0af29a7f8e111fc4d46c7a30a17459. deleted, state: {ff0af29a7f8e111fc4d46c7a30a17459 state=OPEN, ts=1464282657535, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,540 INFO [AM.ZK.Worker-pool2-t39] master.RegionStates: Onlined ff0af29a7f8e111fc4d46c7a30a17459 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,540 INFO [AM.ZK.Worker-pool2-t39] master.RegionStates: Offlined ff0af29a7f8e111fc4d46c7a30a17459 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,543 DEBUG [AM.ZK.Worker-pool2-t26] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=1e9ca9e463ebd7af1fbd910bd2d570a6, current_state={1e9ca9e463ebd7af1fbd910bd2d570a6 state=PENDING_OPEN, ts=1464282656630, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,543 INFO [AM.ZK.Worker-pool2-t26] master.RegionStates: Transitioned {1e9ca9e463ebd7af1fbd910bd2d570a6 state=PENDING_OPEN, ts=1464282656630, server=fsdata1c.corp.arc.com,60020,1464282355492} to {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPENING, ts=1464282657543, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,570 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=a438c3fccb4dffce6c3f2fb2a217ff18, current_state={a438c3fccb4dffce6c3f2fb2a217ff18 state=OPENING, ts=1464282657526, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,570 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Transitioned {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPENING, ts=1464282657526, server=fsdata1c.corp.arc.com,60020,1464282355492} to {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPEN, ts=1464282657570, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,571 DEBUG [AM.ZK.Worker-pool2-t31] handler.OpenedRegionHandler: Handling OPENED of a438c3fccb4dffce6c3f2fb2a217ff18 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,574 DEBUG [AM.ZK.Worker-pool2-t31] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node a438c3fccb4dffce6c3f2fb2a217ff18 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,574 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Znode CUTOFF1,,1463913041538.a438c3fccb4dffce6c3f2fb2a217ff18. deleted, state: {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPEN, ts=1464282657570, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,574 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Onlined a438c3fccb4dffce6c3f2fb2a217ff18 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,574 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Offlined a438c3fccb4dffce6c3f2fb2a217ff18 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,576 DEBUG [AM.ZK.Worker-pool2-t34] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=bd20eeb125be62c29b0e19960472e76d, current_state={bd20eeb125be62c29b0e19960472e76d state=PENDING_OPEN, ts=1464282656702, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,576 INFO [AM.ZK.Worker-pool2-t34] master.RegionStates: Transitioned {bd20eeb125be62c29b0e19960472e76d state=PENDING_OPEN, ts=1464282656702, server=fsdata1c.corp.arc.com,60020,1464282355492} to {bd20eeb125be62c29b0e19960472e76d state=OPENING, ts=1464282657576, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,584 DEBUG [AM.ZK.Worker-pool2-t35] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f5836191f2d1a9806269864db4287786, current_state={f5836191f2d1a9806269864db4287786 state=OPENING, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,584 INFO [AM.ZK.Worker-pool2-t35] master.RegionStates: Transitioned {f5836191f2d1a9806269864db4287786 state=OPENING, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f5836191f2d1a9806269864db4287786 state=OPEN, ts=1464282657584, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,584 DEBUG [AM.ZK.Worker-pool2-t35] handler.OpenedRegionHandler: Handling OPENED of f5836191f2d1a9806269864db4287786 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,587 DEBUG [AM.ZK.Worker-pool2-t35] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node f5836191f2d1a9806269864db4287786 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,587 DEBUG [AM.ZK.Worker-pool2-t35] master.AssignmentManager: Znode MONE,O31\x09145411\x092\x091526,1452771105934.f5836191f2d1a9806269864db4287786. deleted, state: {f5836191f2d1a9806269864db4287786 state=OPEN, ts=1464282657584, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,587 INFO [AM.ZK.Worker-pool2-t35] master.RegionStates: Onlined f5836191f2d1a9806269864db4287786 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,587 INFO [AM.ZK.Worker-pool2-t35] master.RegionStates: Offlined f5836191f2d1a9806269864db4287786 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,594 DEBUG [AM.ZK.Worker-pool2-t33] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f07d37734f7d70dd47f1545345b772e9, current_state={f07d37734f7d70dd47f1545345b772e9 state=PENDING_OPEN, ts=1464282656745, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,594 INFO [AM.ZK.Worker-pool2-t33] master.RegionStates: Transitioned {f07d37734f7d70dd47f1545345b772e9 state=PENDING_OPEN, ts=1464282656745, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f07d37734f7d70dd47f1545345b772e9 state=OPENING, ts=1464282657594, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,625 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=1e9ca9e463ebd7af1fbd910bd2d570a6, current_state={1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPENING, ts=1464282657543, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,625 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Transitioned {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPENING, ts=1464282657543, server=fsdata1c.corp.arc.com,60020,1464282355492} to {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPEN, ts=1464282657625, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,625 DEBUG [AM.ZK.Worker-pool2-t21] handler.OpenedRegionHandler: Handling OPENED of 1e9ca9e463ebd7af1fbd910bd2d570a6 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,633 DEBUG [AM.ZK.Worker-pool2-t21] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 1e9ca9e463ebd7af1fbd910bd2d570a6 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,633 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Znode MONE,,1447155053918.1e9ca9e463ebd7af1fbd910bd2d570a6. deleted, state: {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPEN, ts=1464282657625, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,633 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Onlined 1e9ca9e463ebd7af1fbd910bd2d570a6 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,633 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Offlined 1e9ca9e463ebd7af1fbd910bd2d570a6 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,645 DEBUG [AM.ZK.Worker-pool2-t37] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,646 INFO [AM.ZK.Worker-pool2-t37] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492} to {44e206d15b62ed4d452545242bd105cd state=OPENING, ts=1464282657646, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,657 DEBUG [AM.ZK.Worker-pool2-t28] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f07d37734f7d70dd47f1545345b772e9, current_state={f07d37734f7d70dd47f1545345b772e9 state=OPENING, ts=1464282657594, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,657 INFO [AM.ZK.Worker-pool2-t28] master.RegionStates: Transitioned {f07d37734f7d70dd47f1545345b772e9 state=OPENING, ts=1464282657594, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f07d37734f7d70dd47f1545345b772e9 state=OPEN, ts=1464282657657, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,657 DEBUG [AM.ZK.Worker-pool2-t28] handler.OpenedRegionHandler: Handling OPENED of f07d37734f7d70dd47f1545345b772e9 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,661 DEBUG [AM.ZK.Worker-pool2-t28] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node f07d37734f7d70dd47f1545345b772e9 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,661 DEBUG [AM.ZK.Worker-pool2-t28] master.AssignmentManager: Znode TSO,,1464172262155.f07d37734f7d70dd47f1545345b772e9. deleted, state: {f07d37734f7d70dd47f1545345b772e9 state=OPEN, ts=1464282657657, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,661 INFO [AM.ZK.Worker-pool2-t28] master.RegionStates: Onlined f07d37734f7d70dd47f1545345b772e9 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,661 INFO [AM.ZK.Worker-pool2-t28] master.RegionStates: Offlined f07d37734f7d70dd47f1545345b772e9 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,701 DEBUG [AM.ZK.Worker-pool2-t29] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=OPENING, ts=1464282657646, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,701 INFO [AM.ZK.Worker-pool2-t29] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=OPENING, ts=1464282657646, server=fsdata1c.corp.arc.com,60020,1464282355492} to {44e206d15b62ed4d452545242bd105cd state=OPEN, ts=1464282657701, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,701 DEBUG [AM.ZK.Worker-pool2-t29] handler.OpenedRegionHandler: Handling OPENED of 44e206d15b62ed4d452545242bd105cd from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,705 DEBUG [AM.ZK.Worker-pool2-t29] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 44e206d15b62ed4d452545242bd105cd in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,705 DEBUG [AM.ZK.Worker-pool2-t38] master.AssignmentManager: Znode CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd. deleted, state: {44e206d15b62ed4d452545242bd105cd state=OPEN, ts=1464282657701, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,705 INFO [AM.ZK.Worker-pool2-t38] master.RegionStates: Onlined 44e206d15b62ed4d452545242bd105cd on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,705 INFO [AM.ZK.Worker-pool2-t38] master.RegionStates: Offlined 44e206d15b62ed4d452545242bd105cd from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,711 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=bd20eeb125be62c29b0e19960472e76d, current_state={bd20eeb125be62c29b0e19960472e76d state=OPENING, ts=1464282657576, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,711 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {bd20eeb125be62c29b0e19960472e76d state=OPENING, ts=1464282657576, server=fsdata1c.corp.arc.com,60020,1464282355492} to {bd20eeb125be62c29b0e19960472e76d state=OPEN, ts=1464282657711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,711 DEBUG [AM.ZK.Worker-pool2-t30] handler.OpenedRegionHandler: Handling OPENED of bd20eeb125be62c29b0e19960472e76d from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,714 DEBUG [AM.ZK.Worker-pool2-t30] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node bd20eeb125be62c29b0e19960472e76d in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,714 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Znode MONO,,1450265445296.bd20eeb125be62c29b0e19960472e76d. deleted, state: {bd20eeb125be62c29b0e19960472e76d state=OPEN, ts=1464282657711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,714 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Onlined bd20eeb125be62c29b0e19960472e76d on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,714 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Offlined bd20eeb125be62c29b0e19960472e76d from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:13:42,930 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata1c.corp.arc.com%2C60020%2C1464281939497.1464281944440
2016-05-26 17:13:42,934 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata2c.corp.arc.com%2C60020%2C1464281948047.1464281950096
2016-05-26 17:13:42,938 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata3c.corp.arc.com%2C60020%2C1464281938965.1464281944289
2016-05-26 17:13:42,941 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata3c.corp.arc.com%2C60020%2C1464281938965.1464281946729.meta
05-26-2016
06:16 PM
No, there is no error after 18.01.
05-26-2016
06:05 PM
Attached is the HBase region server log: hbase-regionserver.txt.
05-26-2016
05:25 PM
Attached are the HBase Master error details: hbase-master.txt.
05-26-2016
05:14 PM
I increased the value from 10485760 B to 31457280 B. Now I am getting the following exception:
2016-05-26 15:39:53,949 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:39:55,266 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:39:56,773 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:39:58,785 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:40:01,809 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
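The post does not name which property was raised; assuming it is the client-side hbase.client.keyvalue.maxsize limit that HTable.validatePut enforces, the same value can also be set on the client Configuration instead of hbase-site.xml. A minimal, hypothetical sketch (table name, row key, and payload below are placeholders, not from the thread):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
public class KeyValueMaxSizeExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Assumption: raise the per-KeyValue client limit from 10485760 B to 31457280 B,
        // matching the change described in the post above.
        conf.set("hbase.client.keyvalue.maxsize", "31457280");
        HTable table = new HTable(conf, "CUTOFF4"); // placeholder table name
        Put put = new Put("rowkey".getBytes());     // placeholder row key
        put.add("f".getBytes(), "c".getBytes(), new byte[20 * 1024 * 1024]); // ~20 MB cell
        table.put(put); // passes the client-side size check with the higher limit
        table.close();
    }
}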
05-26-2016
04:47 PM
Getting the exception below:
2016-05-26 16:46:54,288 INFO [main] util.FileSyncLog: containerCell:10412488
2016-05-26 16:46:54,298 INFO [main] util.FileSyncLog: containerCellUpdated:10538784
java.lang.IllegalArgumentException: KeyValue size too large
at org.apache.hadoop.hbase.client.HTable.validatePut(HTable.java:1353)
at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:989)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:953)
at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.put(HTablePool.java:432)
at com.bizosys.hsearch.hbase.HTableWrapper.put(HTableWrapper.java:117)
at com.arc.hbase.jobs.CacheBuildJob.saveContainer(CacheBuildJob.java:410)
at com.arc.hbase.jobs.CacheBuildJob.save(CacheBuildJob.java:320)
at com.arc.hbase.jobs.CacheBuildJob.exec(CacheBuildJob.java:171)
at com.arc.hbase.jobs.CacheBuildJob.run(CacheBuildJob.java:75)
at com.arc.hbase.jobs.CacheBuildJob.main(CacheBuildJob.java:509)
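The containerCellUpdated size logged just above, 10538784 bytes, is slightly over the 10485760-byte limit mentioned in the later post, which is why HTable.validatePut rejects the Put. A minimal sketch, not from the posted code, of a defensive size check before calling put() (family, qualifier, and row key are placeholders; the limit value is taken from the thread):
import org.apache.hadoop.hbase.client.Put;
public class PutSizeCheck {
    // Limit from the thread: 10485760 B, the client-side default
    private static final long MAX_KEYVALUE_SIZE = 10485760L;
    static boolean fitsLimit(Put put) {
        // heapSize() slightly overestimates the serialized size, but gives a cheap upper bound
        return put.heapSize() <= MAX_KEYVALUE_SIZE;
    }
    public static void main(String[] args) {
        Put update = new Put("someContainerKey".getBytes()); // placeholder row key
        update.add("f".getBytes(), "c".getBytes(), new byte[11 * 1024 * 1024]); // oversized cell
        if (!fitsLimit(update)) {
            System.err.println("Cell too large for current client limit: " + update.heapSize());
        }
    }
}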
05-26-2016
04:42 PM
Below is the code:
package com.arc.hbase.jobs;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import com.arc.datasink.HBaseTables;
import com.arc.management.MonitorCacheBuildJob;
import com.arc.management.MonitorCollector;
import com.arc.management.MonitorMeasure;
import com.arc.rest.common.BytesMergerContainer;
import com.arc.rest.common.BytesMergerObject;
import com.arc.rest.common.ChangeDecorator;
import com.arc.rest.common.ChangeSetDecorator;
import com.arc.rest.common.ObjectKey;
import com.arc.rest.common.PostChanges_3_0.Changes;
import com.arc.util.ArcConfig;
import com.arc.util.FileSyncLog;
import com.arc.util.LineReaderUtil;
import com.bizosys.hsearch.hbase.HBaseFacade;
import com.bizosys.hsearch.hbase.HTableWrapper;
public class CacheBuildJob
{
private static final boolean INFO_ENABLED = FileSyncLog.l.isInfoEnabled();
private static final char SEPARATOR_OBJID = ',';
private static CacheBuildJob instance = null;
private static final boolean MONITOR_JOB = ArcConfig.MONITOR_CACHE_BUILD_JOB;
public static CacheBuildJob getInstance()
{
if ( null != instance) return instance;
synchronized (CacheBuildJob.class)
{
if ( null != instance ) return instance;
instance = new CacheBuildJob();
}
return instance;
}
long lastUpdatedTime= new Date().getTime() - ArcConfig.CACHE_BUILD_START_INTERVAL;
int lastUpdatedDt = new Date(lastUpdatedTime).getDate();
private CacheBuildJob()
{
}
boolean isRunning = false;
public void run()
{
if ( isRunning )
{
if ( INFO_ENABLED ) FileSyncLog.l.info(new Date().toLocaleString() + " Cache Build Job SKIPPED");
return;
}
isRunning = true;
long start = System.currentTimeMillis();
try
{
exec();
long end = System.currentTimeMillis();
if ( INFO_ENABLED ) FileSyncLog.l.info(new Date().toLocaleString() + " Cache Build Job SUCESS, Run time in ms: " + (end - start));
}
catch (Exception ex)
{
long end = System.currentTimeMillis();
ex.printStackTrace();
FileSyncLog.l.fatal(new Date().toLocaleString() + " Cache Build Job FAILED, Run time in ms: " + (end - start));
}
finally
{
isRunning = false;
}
}
public void exec() throws Exception
{
long currentTime = System.currentTimeMillis();
//Take the clock 2 sec to past
currentTime = currentTime - ArcConfig.CACHEBUILD_JOB_RUN_INTERVAL;
if ( lastUpdatedTime >= currentTime) {
FileSyncLog.l.warn("CacheBuildJob : lastUpdatedTime >= currentTime : " + lastUpdatedTime + ">=" + currentTime);
return;
}
Date now = new Date(currentTime);
long startTime = currentTime;
int currentUpdatedDt = now.getDate();
Map<String, String> uniqueCotainerKeyWithObjectKeys = new HashMap<String, String>(1024);
List<ChangeDecorator.Deser> timeseriesChanges = new ArrayList<ChangeDecorator.Deser>(1024);
if ( INFO_ENABLED) FileSyncLog.l.info("AddToTimeseriesChanges: Start");
if ( MONITOR_JOB ) MonitorCacheBuildJob.getInstance().onEnter();
//Step1 - Get last left rows from the old table
if ( lastUpdatedDt != currentUpdatedDt)
{
// String tsTable = HBaseTables.getLastTimeSeriesTable(currentUpdatedDt);
String tsTable = HBaseTables.getLastTimeSeriesTable(currentTime);
addToTimeseriesChanges(tsTable, startTime, uniqueCotainerKeyWithObjectKeys, timeseriesChanges);
}
if ( INFO_ENABLED) FileSyncLog.l.info("AddToTimeseriesChanges, Changes: " + timeseriesChanges.size() + " Projects:" + uniqueCotainerKeyWithObjectKeys.size());
//Step2 - Get from current table
// String tsTable = HBaseTables.getTimeSeriesTable(currentUpdatedDt);
String tsTable = HBaseTables.getTimeSeriesTable(currentTime);
addToTimeseriesChanges(tsTable, startTime, uniqueCotainerKeyWithObjectKeys, timeseriesChanges);
if ( INFO_ENABLED)
FileSyncLog.l.info("AddToTimeseriesChanges, Changes: " + timeseriesChanges.size() + " Projects:" + uniqueCotainerKeyWithObjectKeys.size());
//Step3 -Merge with cutoff table.
String currentCutoffTableName = HBaseTables.getCutoffTable(currentUpdatedDt);
String lastCutoffTableName = HBaseTables.getLastCutoffTable(currentUpdatedDt);
HBaseFacade facade = null;
HTableWrapper currentCutoffTable = null;
HTableWrapper lastCutoffTable = null;
long cutoffTime = startTime - HBaseTables.CUTOFF_DURATION_SECS * 1000;
/**
* We have all the ChangeDecorators. Next Steps
* >
*/
try {
facade = HBaseFacade.getInstance();
if ( INFO_ENABLED) {
FileSyncLog.l.info("Current Cutoff Table: " + currentCutoffTableName +
" (Size)" + timeseriesChanges.size() + " (Cutoff Limit)" + new Date(cutoffTime));
}
currentCutoffTable = facade.getTable(currentCutoffTableName);
lastCutoffTable = facade.getTable(lastCutoffTableName);
// System.out.println("TimeSeriesTable - "+tsTable+"\tCurrent Cutoff Table - "+ currentCutoffTableName+"\tLast Cutoff Table - "+ lastCutoffTableName + " on " + now.toString() + " time in millis "+ now.getTime() + " currentTime " + currentTime);
int batchSize = ArcConfig.CACHE_BUILD_BATCH_SIZE;
Map<String, ChangeSetDecorator.Deser> objKeyWithChangeSets =
new HashMap<String, ChangeSetDecorator.Deser>(1024);
List<Put> putL = new ArrayList<Put>(batchSize);
for (ChangeDecorator.Deser deserCh : timeseriesChanges)
{
deserCh.touch(System.currentTimeMillis());
String objectKey = deserCh.objectKey;
/**
* Batch Flush on 4096 objects = 4MB
*/
// System.out.println("CacheBuild Time -: "+(System.currentTimeMillis()-deserCh.getTime()));
if ( objKeyWithChangeSets.size() >= batchSize )
{
if ( INFO_ENABLED) FileSyncLog.l.info("Saving: Enter");
save(currentCutoffTable, lastCutoffTable, objKeyWithChangeSets, uniqueCotainerKeyWithObjectKeys, cutoffTime, putL);
if ( INFO_ENABLED) FileSyncLog.l.info("Saving: Exit");
}
/**
* Step: 1 # Memory Table Lookup,
* If any object id changes are already there, means already loaded and read.
* Just merge to it.
*/
if (objKeyWithChangeSets.containsKey(objectKey)) {
if ( INFO_ENABLED) FileSyncLog.l.info("Memory Table Lookup: " + objectKey);
ChangeSetDecorator.Deser mergedVal =
createObjChangeSets(objKeyWithChangeSets.get(objectKey), deserCh, cutoffTime);
mergedVal.key = objectKey;
mergedVal.itemId = deserCh.getChanges().getItemId();
objKeyWithChangeSets.put(objectKey, mergedVal);
continue;
}
Get getter = new Get(objectKey.getBytes());
/**
* Step: 2 # Look in current cutoff Table,
*/
Result resultC = currentCutoffTable.get(getter);
{
if ( null != resultC) {
byte[] val = resultC.getValue(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
int valSize = ( null == val) ? 0 : val.length;
if ( valSize == 0 ) val = null;
if ( null != val ) {
if ( INFO_ENABLED) FileSyncLog.l.info("Curent cutoff table Lookup: " + objectKey);
ChangeSetDecorator.Deser cs = new ChangeSetDecorator.Deser(val);
cs.key = objectKey;
cs.itemId = deserCh.getChanges().getItemId();
ChangeSetDecorator.Deser mergedVal = createObjChangeSets(cs, deserCh, cutoffTime);
objKeyWithChangeSets.put(objectKey, mergedVal);
continue;
}
}
}
/**
* Step: 3 # Fall back to the last cutoff table, as the object does not
* exist in the current cutoff table.
*/
Result resultO = lastCutoffTable.get(getter);
if ( null != resultO) {
byte[] val = resultO.getValue(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
int valSize = ( null == val) ? 0 : val.length;
if ( valSize == 0 ) val = null;
if ( null != val ) {
if ( INFO_ENABLED) FileSyncLog.l.info("Previous cutoff table Lookup: " + objectKey);
ChangeSetDecorator.Deser cs = new ChangeSetDecorator.Deser(val);
cs.key = objectKey;
cs.itemId = deserCh.getChanges().getItemId();
ChangeSetDecorator.Deser mergedVal = createObjChangeSets(cs, deserCh, cutoffTime);
objKeyWithChangeSets.put(objectKey, mergedVal);
continue;
}
}
/**
* Not found in the current or the last cutoff table.
* It is a fresh change, so bootstrap its change set.
*/
if ( INFO_ENABLED) FileSyncLog.l.info("Bootstrapping: " + objectKey);
ChangeSetDecorator.Deser none = new ChangeSetDecorator.Deser(null);
none.key = objectKey;
none.itemId = deserCh.getChanges().getItemId();
ChangeSetDecorator.Deser mergedVal = createObjChangeSets(none, deserCh, -1);
objKeyWithChangeSets.put(objectKey, mergedVal);
}
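/**
* Flush whatever is still buffered after the loop (typically fewer than batchSize entries).
*/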
if ( objKeyWithChangeSets.size() > 0 ) {
save(currentCutoffTable, lastCutoffTable,
objKeyWithChangeSets, uniqueCotainerKeyWithObjectKeys, cutoffTime, putL);
}
/**
* Step: 4 # All success, move to the next timestamp
*/
lastUpdatedTime = startTime; //maximum timestamp value, exclusive
lastUpdatedDt = currentUpdatedDt;
} catch (Exception ex) {
throw ex;
} finally {
if ( null != facade && null != currentCutoffTable) facade.putTable(currentCutoffTable);
if ( null != facade && null != lastCutoffTable) facade.putTable(lastCutoffTable);
}
long endTime = System.currentTimeMillis();
if ( MONITOR_JOB )
{
long timeTaken = (endTime - startTime);
MonitorCollector collector = new MonitorCollector();
collector.add(new MonitorMeasure("CacheBuildJob", timeTaken));
MonitorCacheBuildJob.getInstance().onExit(collector);
}
}
/**
* Writes the buffered object change-sets to the current cutoff table in one
* batch, then merges and saves the container (project) level change-sets.
* @param currentCutoffTable wrapper for the current cutoff table
* @param lastCutoffTable wrapper for the previous cutoff table
* @param objKeyWithChangeSets buffered change-sets keyed by object key; cleared after the save
* @param uniqueProjectIdWithObjIds project id mapped to its concatenated object keys
* @param cutoffTime cutoff timestamp passed to the container merge
* @param putL reusable Put buffer, cleared and refilled here
* @throws IOException
*/
private void save(HTableWrapper currentCutoffTable, HTableWrapper lastCutoffTable,
Map<String, ChangeSetDecorator.Deser> objKeyWithChangeSets,
Map<String, String> uniqueProjectIdWithObjIds,
long cutoffTime, List<Put> putL)
throws IOException {
putL.clear();
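// Build one Put per buffered change-set; the whole list is written to the current cutoff table in a single batch below.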
for (String key : objKeyWithChangeSets.keySet()) {
ChangeSetDecorator.Deser val = objKeyWithChangeSets.get(key);
Put update = new Put(key.getBytes());
update.add(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME, val.data);
update.setDurability(Durability.SYNC_WAL);
putL.add(update);
}
currentCutoffTable.put(putL);
if ( INFO_ENABLED) FileSyncLog.l.info("Cutoff Table Objects Added - " + putL.size());
putL.clear();
saveContainer(currentCutoffTable, lastCutoffTable,
objKeyWithChangeSets, uniqueProjectIdWithObjIds, cutoffTime);
currentCutoffTable.flushCommits();
objKeyWithChangeSets.clear();
}
/**
* Merges the per-object change-sets into their container (project) level
* change-set and saves the result to the current cutoff table.
* @param currentCutoffTable wrapper for the current cutoff table
* @param lastCutoffTable wrapper for the previous cutoff table, used as a fallback lookup
* @param objKeyWithChangeSets buffered change-sets keyed by object key
* @param uniqueProjectIdWithObjIds project id mapped to its concatenated object keys
* @param cutoffTime cutoff timestamp passed to the container merge
* @throws IOException
*/
private void saveContainer(HTableWrapper currentCutoffTable,
HTableWrapper lastCutoffTable,
Map<String, ChangeSetDecorator.Deser> objKeyWithChangeSets,
Map<String, String> uniqueProjectIdWithObjIds, long cutoffTime)
throws IOException {
/**
* mergeContainerChanges for the current projects
*/
List<String> objKeyL = new ArrayList<String>();
Set<ChangeSetDecorator.Deser> containerObjects = new HashSet<ChangeSetDecorator.Deser>();
for (String projectId : uniqueProjectIdWithObjIds.keySet()) {
objKeyL.clear();
containerObjects.clear();
/**
* Find out all object Ids belonging to this project and in current set
*/
String objectKeys = uniqueProjectIdWithObjIds.get(projectId);
LineReaderUtil.fastSplit(objKeyL, objectKeys, SEPARATOR_OBJID);
for (String objKey : objKeyL) {
ChangeSetDecorator.Deser val = objKeyWithChangeSets.get(objKey);
if ( null != val) containerObjects.add( val);
}
if ( INFO_ENABLED) FileSyncLog.l.info( "projectId:" + projectId + " ,Objects =" + containerObjects.size());
byte[] projectIdB = projectId.getBytes();
Get containerId = new Get(projectIdB);
/**
* Look up the changes in the current cutoff table.
*/
byte[] containerCell = null;
Result res = currentCutoffTable.get(containerId);
if ( null != res) {
containerCell = res.getValue(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME);
}
/**
* If the project changes are not available in the current cutoff table,
* fall back to the last cutoff table.
*/
int containerCellSize = ( null == containerCell) ? 0 : containerCell.length;
if ( containerCellSize == 0 ) {
res = lastCutoffTable.get(containerId);
if ( null != res) {
containerCell = res.getValue(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME);
}
}
containerCellSize = ( null == containerCell) ? 0 : containerCell.length;
if ( containerCellSize == 0 ) containerCell = null;
/**
* Merge the data
*/
if ( INFO_ENABLED ) FileSyncLog.l.info("containerCell:" +
( (null == containerCell) ? 0 : containerCell.length) ) ;
byte[] containerCellUpdated = BytesMergerContainer.mergeContainerChangesD(
containerCell, containerObjects, cutoffTime, -1L);
if ( INFO_ENABLED ) FileSyncLog.l.info("containerCellUpdated:" +
( (null == containerCellUpdated) ? 0 : containerCellUpdated.length) ) ;
/**
* Save to current cutoff table
*/
Put containerUpdate = new Put(projectIdB);
containerUpdate.add(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME, containerCellUpdated);
containerUpdate.setDurability(Durability.SYNC_WAL);
currentCutoffTable.put(containerUpdate);
}
if ( INFO_ENABLED) FileSyncLog.l.info("Cutoff Table Containers Added - " + uniqueProjectIdWithObjIds.size());
}
/**
* Merges the current change into the existing cutoff change-set.
* @param existingCutoffBytes existing change-set; its data is replaced with the merged bytes
* @param currentChanges the change to merge in
* @param cutoffTime cutoff timestamp passed to the merge
* @return the same decorator, now carrying the merged data
* @throws IOException
*/
public final ChangeSetDecorator.Deser createObjChangeSets(final ChangeSetDecorator.Deser existingCutoffBytes,
final ChangeDecorator.Deser currentChanges, final long cutoffTime) throws IOException {
byte[] data = BytesMergerObject.mergeObjectChanges(
existingCutoffBytes.data, currentChanges, cutoffTime, -1);
existingCutoffBytes.data = data;
return existingCutoffBytes;
}
/**
* Scans the current time-series table for changes written since the last run
* and collects them, grouping object keys by their container key.
* @param tsTableCurrent name of the current time-series table
* @param startTime upper bound (exclusive) of the scan time range
* @param uniqueContainerKeyWithObjectKeys container key mapped to its object keys, populated here
* @param timeseriesChanges collected changes, populated here
*/
public void addToTimeseriesChanges(String tsTableCurrent, long startTime,
Map<String, String> uniqueContainerKeyWithObjectKeys, List<ChangeDecorator.Deser> timeseriesChanges) {
HBaseFacade facade = null;
HTableWrapper table = null;
ResultScanner scanner = null;
try {
facade = HBaseFacade.getInstance();
table = facade.getTable(tsTableCurrent);
Scan scan = new Scan();
scan.setCaching(1024);
scan.setMaxVersions(1);
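// Only cells written in [lastUpdatedTime, startTime) are returned; the upper bound is exclusive.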
scan.setTimeRange(lastUpdatedTime, startTime);
scan = scan.addColumn(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
scanner = table.getScanner(scan);
StringBuilder keyBuilder = new StringBuilder();
int counter = 0;
for (Result r: scanner) {
if ( null == r) continue;
if ( r.isEmpty()) continue;
counter++;
if ( counter % 1000 == 0 ) FileSyncLog.l.info(tsTableCurrent + " read : " + counter);
byte[] changeB = r.getValue(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
int changeBSize = ( null == changeB) ? 0 : changeB.length;
if ( changeBSize == 0 ) continue;
if ( INFO_ENABLED) FileSyncLog.l.info("Inside AddToTimeSeries: changeB: "+changeB.toString());
ChangeDecorator.Deser currentChangeDeser = new ChangeDecorator.Deser(changeB);
Changes currentChange = currentChangeDeser.getChanges();
//Add to Unique Projects
String containerKey = ObjectKey.getContainerKey(keyBuilder,currentChange);
String objectKey = ObjectKey.getObjectKey(keyBuilder,currentChange);
currentChangeDeser.objectKey = objectKey;
if (uniqueContainerKeyWithObjectKeys.containsKey(containerKey)) {
uniqueContainerKeyWithObjectKeys.put(containerKey,
uniqueContainerKeyWithObjectKeys.get(containerKey) + SEPARATOR_OBJID + objectKey);
} else {
uniqueContainerKeyWithObjectKeys.put(containerKey, objectKey);
}
//Merge Actions of a Object.
timeseriesChanges.add(currentChangeDeser);
}
} catch (Exception e) {
FileSyncLog.l.fatal("Unable to execute daily update job. " ,e);
} finally {
if (null != scanner) {
try {scanner.close();} catch (Exception ex) {ex.printStackTrace();}
}
if (null != table) {
try {facade.putTable(table);} catch (Exception ex) {ex.printStackTrace();}
}
}
}
public static void main(String[] args) throws Exception {
CacheBuildJob.getInstance().run();
}
}
05-26-2016
04:30 PM
Here is the region server log on fsdata1c.corp.arc.com:
2016-05-26 13:57:47,158 WARN [RpcServer.handler=55,port=60020] ipc.RpcServer: RpcServer.respondercallId: 4324 service: ClientService methodName: Scan size: 30 connection: 10.1.1.243:52740: output error
2016-05-26 13:57:47,159 WARN [RpcServer.handler=55,port=60020] ipc.RpcServer: RpcServer.handler=55,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
2016-05-26 13:58:47,135 INFO [regionserver60020.leaseChecker] regionserver.HRegionServer: Scanner 3235538737043012213 lease expired on region CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd.
2016-05-26 13:58:52,422 INFO [RpcServer.reader=8,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2224)
at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1415)
at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:790)
at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:581)
at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:556)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
2016-05-26 13:58:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12076, accesses=3683357, hits=3525415, hitRatio=95.71%, , cachingAccesses=3547447, cachingHits=3448937, cachingHitsRatio=97.22%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 13:59:52,420 INFO [regionserver60020.leaseChecker] regionserver.HRegionServer: Scanner 594209239597513333 lease expired on region CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894.
2016-05-26 14:03:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3720308, hits=3562365, hitRatio=95.75%, , cachingAccesses=3584398, cachingHits=3485887, cachingHitsRatio=97.25%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:08:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3749373, hits=3591430, hitRatio=95.79%, , cachingAccesses=3613463, cachingHits=3514952, cachingHitsRatio=97.27%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:13:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3769032, hits=3611089, hitRatio=95.81%, , cachingAccesses=3633122, cachingHits=3534611, cachingHitsRatio=97.29%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:18:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3769844, hits=3611901, hitRatio=95.81%, , cachingAccesses=3633934, cachingHits=3535423, cachingHitsRatio=97.29%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:23:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3831804, hits=3673861, hitRatio=95.88%, , cachingAccesses=3695894, cachingHits=3597383, cachingHitsRatio=97.33%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:28:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3832074, hits=3674131, hitRatio=95.88%, , cachingAccesses=3696164, cachingHits=3597653, cachingHitsRatio=97.33%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:33:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3844554, hits=3686611, hitRatio=95.89%, , cachingAccesses=3708644, cachingHits=3610133, cachingHitsRatio=97.34%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:38:11,712 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894. after a delay of 15478
2016-05-26 14:38:21,712 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894. after a delay of 19133
2016-05-26 14:38:27,190 DEBUG [Thread-20] regionserver.HRegion: Started memstore flush for CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894., current region memstore size 93.2 M
2016-05-26 14:38:27,401 INFO [Thread-20] regionserver.DefaultStoreFlusher: Flushed, sequenceid=96047, memsize=31.7 M, hasBloomFilter=true, into tmp file hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/data/default/CUTOFF4/48ec64624ad37ae9272c5c28ec177894/.tmp/6d1e9fe6186a448f9a322c73ecf4ad0a
2016-05-26 14:38:27,411 DEBUG [Thread-20] regionserver.HRegionFileSystem: Committing store file hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/data/default/CUTOFF4/48ec64624ad37ae9272c5c28ec177894/.tmp/6d1e9fe6186a448f9a322c73ecf4ad0a as hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/data/default/CUTOFF4/48ec64624ad37ae9272c5c28ec177894/1/6d1e9fe6186a448f9a322c73ecf4ad0a
05-26-2016
04:25 PM
Thanks. Restarting the App Timeline Server worked for me.
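In case it helps anyone else, a rough sketch of the restart, assuming an HDP-style install where yarn-daemon.sh is on the path, /etc/hadoop/conf is the YARN config directory, and the yarn user runs the daemon (exact paths and users vary by version; the component can also be restarted from the Ambari UI under the YARN service):
su - yarn -c "yarn-daemon.sh --config /etc/hadoop/conf stop timelineserver"
su - yarn -c "yarn-daemon.sh --config /etc/hadoop/conf start timelineserver"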