Member since
01-06-2016
54
Posts
15
Kudos Received
4
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1290 | 06-24-2016 06:18 AM
 | 306 | 03-18-2016 12:40 PM
 | 3748 | 03-18-2016 06:28 AM
 | 1556 | 03-08-2016 10:02 AM
08-08-2018
06:03 AM
This error occurs because the symbolic link is broken (meaning the HBase client jar is not present on the node):

ll /usr/hdp/current/hbase-client/lib/hbase-client.jar

Copy the HBase client jar from another node to the non-working node:

/usr/hdp/current/hbase-client/lib/hbase-client-*.jar

Please note that if other jars are missing as well, copy all the necessary missing jars from another node under /usr/hdp/current/
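A minimal sketch of the check-and-copy, assuming passwordless SSH between nodes and using "goodnode" as a placeholder hostname for any healthy node:

# A dangling symlink or "No such file or directory" here confirms the problem
ls -l /usr/hdp/current/hbase-client/lib/hbase-client.jar

# Pull the client jar(s) over from a healthy node ("goodnode" is hypothetical)
scp goodnode:/usr/hdp/current/hbase-client/lib/hbase-client-*.jar /usr/hdp/current/hbase-client/lib/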
... View more
08-24-2016
01:43 PM
Thanks a lot @Victor Xu. All points are clear.
... View more
08-23-2016
09:05 AM
Hi @Victor Xu, thanks, I understand your point. I have a couple of questions to understand the scenario more clearly:
1. If I put data into the temporary HBase cluster during the main HBase cluster's downtime, how will I merge the data from the temporary cluster into the main cluster once it is back up and running?
2. When I am restoring data from the HDFS HFile location to a new location, how will I recover the memstore data?
3. If I shut down and restart the HBase service, is the memstore data flushed to HDFS HFiles at that time?
Thanks, Raja
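For reference on question 3: memstore contents can also be written out to HFiles on demand from the HBase shell, without a full shutdown. A minimal sketch (flush is a standard HBase shell command; the table name is taken from later posts in this thread):

# Inside the HBase shell: force the table's memstore to be flushed to HFiles
hbase shell
flush 'CUTOFF2'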
... View more
08-22-2016
03:15 PM
Hi @Victor Xu, I followed your steps and it is working fine, but I needed to restart HBase. Can you please suggest any other way where I don't need to restart the HBase service? Thanks, Raja
... View more
08-22-2016
06:36 AM
Thanks Victor. I will follow your steps and will let you know.
... View more
08-22-2016
04:01 AM
1 Kudo
My old HDFS data directory location: /apps/hbase/data
My new HDFS data directory location: /apps/hbase/data2
HBase table name: CUTOFF2

create 'CUTOFF2', {NAME => '1'}

I am doing the following steps to recover the data, but it is not working. Please tell me where I am wrong:

hadoop fs -ls /apps/hbase/data/data/default/CUTOFF2/4c8d68c329cdb6d73d4094fd64e5e37d/1/d321dfcd3b1245d2b5cc2ec1aab3a9f2
hadoop fs -ls /apps/hbase/data2/data/default/CUTOFF2/8f1aff44991e1a08c6a6bbf9c2546cf6/1
put 'CUTOFF2', 'samplerow', '1:1', 'sampledata'
count 'CUTOFF2'
su - hbase
hadoop fs -cp /apps/hbase/data/data/default/CUTOFF2/4c8d68c329cdb6d73d4094fd64e5e37d/1/d321dfcd3b1245d2b5cc2ec1aab3a9f2 /apps/hbase/data2/data/default/CUTOFF2/8f1aff44991e1a08c6a6bbf9c2546cf6/1
major_compact 'CUTOFF2'

Please correct my steps so the recovery works.
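For context: an HFile copied directly into a region directory is not noticed by a running RegionServer until the region is reopened, which is why a restart turned out to be needed later in this thread. One commonly suggested alternative (not the steps used here) is HBase's bulk-load tool, which registers HFiles with a live table without a restart. A minimal sketch, assuming the recovered HFile is first staged under a directory whose subdirectory is named after the column family ('1'); the staging path /tmp/cutoff2-staging is hypothetical:

# Stage the recovered HFile under a column-family-named subdirectory
hadoop fs -mkdir -p /tmp/cutoff2-staging/1
hadoop fs -cp /apps/hbase/data/data/default/CUTOFF2/4c8d68c329cdb6d73d4094fd64e5e37d/1/d321dfcd3b1245d2b5cc2ec1aab3a9f2 /tmp/cutoff2-staging/1/

# Bulk-load the staged HFile into the live table; no RegionServer restart required
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/cutoff2-staging CUTOFF2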
... View more
Labels:
- Apache HBase
08-08-2016
02:22 PM
Unable to scan an HBase table; I am getting the following error. How can I recover the table?

scan 'CUTOFF8'

ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region CUTOFF8,,1465897349742.2077c5dfbfb97d67f09120e4b9cdc15a. is not online on data1.corp.mirrorplus.com,16020,1470665536454
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2898)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:947)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2235)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)

HBase master log:

2016-08-08 09:04:45,112 WARN [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 12866.800703353128 msec.
2016-08-08 09:04:57,979 WARN [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: Could not obtain block: BP-838165258-10.1.1.94-1459246457024:blk_1073781564_40751 file=/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
2016-08-08 09:04:57,979 WARN [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: Could not obtain block: BP-838165258-10.1.1.94-1459246457024:blk_1073781564_40751 file=/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
2016-08-08 09:04:57,979 WARN [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: DFS Read
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-838165258-10.1.1.94-1459246457024:blk_1073781564_40751 file=/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:945)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:604)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
at java.io.DataInputStream.read(DataInputStream.java:100)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:737)
at com.google.protobuf.CodedInputStream.isAtEnd(CodedInputStream.java:701)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:99)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription.<init>(HBaseProtos.java:10616)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription.<init>(HBaseProtos.java:10580)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription$1.parsePartialFrom(HBaseProtos.java:10694)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription$1.parsePartialFrom(HBaseProtos.java:10689)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription.parseFrom(HBaseProtos.java:11177)
at org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils.readSnapshotInfo(SnapshotDescriptionUtils.java:307)
at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.getHFileNames(SnapshotReferenceUtil.java:328)
at org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner$1.filesUnderSnapshot(SnapshotHFileCleaner.java:85)
at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.refreshCache(SnapshotFileCache.java:281)
at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.getUnreferencedFiles(SnapshotFileCache.java:187)
at org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner.getDeletableFiles(SnapshotHFileCleaner.java:62)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteFiles(CleanerChore.java:233)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:157)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:124)
at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:185)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
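A region stuck offline like this can be diagnosed with HBase's hbck consistency checker. A minimal sketch, assuming the HBase 1.x hbck shipped with HDP (the first command is read-only; -fixAssignments re-assigns regions that are offline or assigned incorrectly):

# Run as the hbase user: report inconsistencies without changing anything
sudo -u hbase hbase hbck

# If the CUTOFF8 region is reported as "not deployed", try re-assigning it
sudo -u hbase hbase hbck -fixAssignments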
... View more
Tags:
- Data Processing
- HBase

Labels:
- Apache HBase
06-24-2016
06:18 AM
Hi, I found out the problem. One of the DataNodes got rebooted; that's why this kind of log was written. Thanks.
... View more
06-23-2016
07:37 AM
Hi, the DataNode is going down for the following reason. Can you please tell me the root cause and the resolution?

2016-05-31 06:38:45,807 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:run(295)) - Deleted BP-838165258-10.1.1.94-1459246457024 blk_1073790458_49647 file /var/log/hadoop/hdfs/data/current/BP-838165258-10.1.1.94-1459246457024/current/finalized/subdir0/subdir189/blk_1073790458
2016-05-31 06:38:45,808 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:run(295)) - Deleted BP-838165258-10.1.1.94-1459246457024 blk_1073790460_49649 file /var/log/hadoop/hdfs/data/current/BP-838165258-10.1.1.94-1459246457024/current/finalized/subdir0/subdir189/blk_1073790460
2016-05-31 06:38:50,917 INFO datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-838165258-10.1.1.94-1459246457024:blk_1073790961_50150 src: /10.1.1.30:56265 dest: /10.1.1.29:50010
2016-05-31 06:38:50,987 INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1432)) - src: /10.1.1.30:56265, dest: /10.1.1.29:50010, bytes: 4688706, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-905108031_1, offset: 0, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, blockid: BP-838165258-10.1.1.94-1459246457024:blk_1073790961_50150, duration: 61792605
2016-05-31 06:38:50,988 INFO datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-838165258-10.1.1.94-1459246457024:blk_1073790961_50150, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-05-31 06:39:17,899 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 0, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,900 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 1, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,901 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 2, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,902 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 3, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,903 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 4, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,904 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 5, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,904 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 6, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,905 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 7, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,905 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 8, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,907 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 9, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,908 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 10, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,908 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 11, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,908 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 12, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,908 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 13, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,909 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 14, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,909 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 15, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:17,909 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: b51fe9cee4cd76c97452ee0bfcf62919, slotIdx: 16, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, success: true
2016-05-31 06:39:23,630 ERROR datanode.DataNode (DataXceiver.java:run(278)) - data1.corp.mirrorplus.com:50010:DataXceiver error processing unknown operation src: /127.0.0.1:43209 dst: /127.0.0.1:50010
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
at java.lang.Thread.run(Thread.java:745)
2016-05-31 06:39:30,882 INFO datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-838165258-10.1.1.94-1459246457024:blk_1073790962_50151 src: /10.1.1.30:56392 dest: /10.1.1.29:50010
2016-05-31 06:39:30,902 INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1432)) - src: /10.1.1.30:56169, dest: /10.1.1.29:50010, bytes: 130970563, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-905108031_1, offset: 0, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, blockid: BP-838165258-10.1.1.94-1459246457024:blk_1073790960_50149, duration: 59970347965
2016-05-31 06:39:30,902 INFO datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-838165258-10.1.1.94-1459246457024:blk_1073790960_50149, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790497_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790499_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790505_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790513_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790515_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790523_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790525_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,498 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790527_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,499 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790529_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:50,499 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790531_0 on volume /hadoop/hdfs1/hadoop/hdfs/data
2016-05-31 06:39:56,902 INFO datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-838165258-10.1.1.94-1459246457024:blk_1073790963_50152 src: /10.1.1.30:56483 dest: /10.1.1.29:50010
2016-05-31 06:39:56,968 INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1432)) - src: /10.1.1.30:56483, dest: /10.1.1.29:50010, bytes: 4694274, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-905108031_1, offset: 0, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, blockid: BP-838165258-10.1.1.94-1459246457024:blk_1073790963_50152, duration: 61182280
2016-05-31 06:39:56,968 INFO datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-838165258-10.1.1.94-1459246457024:blk_1073790963_50152, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-05-31 06:39:57,014 INFO datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-838165258-10.1.1.94-1459246457024:blk_1073790964_50153 src: /10.1.1.30:56488 dest: /10.1.1.29:50010
2016-05-31 06:39:57,438 INFO DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1432)) - src: /10.1.1.30:56488, dest: /10.1.1.29:50010, bytes: 31717025, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-905108031_1, offset: 0, srvID: 0362bd37-7e9f-4f43-8f6b-af1d42314e63, blockid: BP-838165258-10.1.1.94-1459246457024:blk_1073790964_50153, duration: 420854449
2016-05-31 06:39:57,438 INFO datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-838165258-10.1.1.94-1459246457024:blk_1073790964_50153, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-05-31 06:40:23,622 ERROR datanode.DataNode (DataXceiver.java:run(278)) - data1.corp.mirrorplus.com:50010:DataXceiver error processing unknown operation src: /127.0.0.1:43354 dst: /127.0.0.1:50010
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
at java.lang.Thread.run(Thread.java:745)
2016-05-31 06:40:30,505 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790484_0 on volume /var/log/hadoop/hdfs/data
2016-05-31 06:40:30,505 INFO datanode.VolumeScanner (VolumeScanner.java:scanBlock(418)) - FileNotFound while finding block BP-838165258-10.1.1.94-1459246457024:blk_1073790492_0 on volume /var/log/hadoop/hdfs/data
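As a first triage step, it may help to confirm how the NameNode currently sees this DataNode; a minimal sketch using a standard HDFS admin command (run as the hdfs user):

# Lists live and dead DataNodes, with per-node capacity and last-contact times
sudo -u hdfs hdfs dfsadmin -report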
... View more
Labels:
- Apache Hadoop
05-26-2016
06:18 PM
Below is the server log (HBase master log):

2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t33] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=af3ed22bb54a2052eaca660332714462, current_state={af3ed22bb54a2052eaca660332714462 state=PENDING_OPEN, ts=1464282656664, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=ff0af29a7f8e111fc4d46c7a30a17459, current_state={ff0af29a7f8e111fc4d46c7a30a17459 state=PENDING_OPEN, ts=1464282656716, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t24] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=87df75ff7669a91c17cd903d5a9f3a3e, current_state={87df75ff7669a91c17cd903d5a9f3a3e state=PENDING_OPEN, ts=1464282656611, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t40] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=047c1d0ed29a28ddb22b3cbbcb787675, current_state={047c1d0ed29a28ddb22b3cbbcb787675 state=PENDING_OPEN, ts=1464282656711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,029 DEBUG [AM.ZK.Worker-pool2-t22] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f07d37734f7d70dd47f1545345b772e9, current_state={f07d37734f7d70dd47f1545345b772e9 state=PENDING_OPEN, ts=1464282656745, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,049 DEBUG [AM.ZK.Worker-pool2-t25] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=fc6e430b85da20d6ede2cd47a8288519, current_state={fc6e430b85da20d6ede2cd47a8288519 state=PENDING_OPEN, ts=1464282656677, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,049 INFO [AM.ZK.Worker-pool2-t25] master.RegionStates: Transitioned {fc6e430b85da20d6ede2cd47a8288519 state=PENDING_OPEN, ts=1464282656677, server=fsdata1c.corp.arc.com,60020,1464282355492} to {fc6e430b85da20d6ede2cd47a8288519 state=OPENING, ts=1464282657049, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 DEBUG [AM.ZK.Worker-pool2-t28] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=047c1d0ed29a28ddb22b3cbbcb787675, current_state={047c1d0ed29a28ddb22b3cbbcb787675 state=PENDING_OPEN, ts=1464282656711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 INFO [AM.ZK.Worker-pool2-t28] master.RegionStates: Transitioned {047c1d0ed29a28ddb22b3cbbcb787675 state=PENDING_OPEN, ts=1464282656711, server=fsdata1c.corp.arc.com,60020,1464282355492} to {047c1d0ed29a28ddb22b3cbbcb787675 state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 DEBUG [AM.ZK.Worker-pool2-t29] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=87df75ff7669a91c17cd903d5a9f3a3e, current_state={87df75ff7669a91c17cd903d5a9f3a3e state=PENDING_OPEN, ts=1464282656611, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 INFO [AM.ZK.Worker-pool2-t29] master.RegionStates: Transitioned {87df75ff7669a91c17cd903d5a9f3a3e state=PENDING_OPEN, ts=1464282656611, server=fsdata1c.corp.arc.com,60020,1464282355492} to {87df75ff7669a91c17cd903d5a9f3a3e state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,162 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Handling RS_ZK_REGION_CLOSED, server=fsdata2c.corp.arc.com,60020,1464282352148, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=PENDING_CLOSE, ts=1464282656595, server=fsdata2c.corp.arc.com,60020,1464282352148}
2016-05-26 17:10:57,162 DEBUG [AM.ZK.Worker-pool2-t30] handler.ClosedRegionHandler: Handling CLOSED event for 44e206d15b62ed4d452545242bd105cd
2016-05-26 17:10:57,162 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=PENDING_CLOSE, ts=1464282656595, server=fsdata2c.corp.arc.com,60020,1464282352148} to {44e206d15b62ed4d452545242bd105cd state=CLOSED, ts=1464282657162, server=fsdata2c.corp.arc.com,60020,1464282352148}
2016-05-26 17:10:57,163 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Found an existing plan for CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd. destination server is fsdata1c.corp.arc.com,60020,1464282355492 accepted as a dest server = true
2016-05-26 17:10:57,163 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Using pre-existing plan for CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd.; plan=hri=CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd., src=fsdata2c.corp.arc.com,60020,1464282352148, dest=fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,163 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=CLOSED, ts=1464282657162, server=fsdata2c.corp.arc.com,60020,1464282352148} to {44e206d15b62ed4d452545242bd105cd state=OFFLINE, ts=1464282657163, server=fsdata2c.corp.arc.com,60020,1464282352148}
2016-05-26 17:10:57,163 DEBUG [AM.ZK.Worker-pool2-t30] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Creating (or updating) unassigned node 44e206d15b62ed4d452545242bd105cd with OFFLINE state
2016-05-26 17:10:57,167 INFO [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Assigning CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd. to fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,167 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=OFFLINE, ts=1464282657163, server=fsdata2c.corp.arc.com,60020,1464282352148} to {44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,176 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=047c1d0ed29a28ddb22b3cbbcb787675, current_state={047c1d0ed29a28ddb22b3cbbcb787675 state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Transitioned {047c1d0ed29a28ddb22b3cbbcb787675 state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492} to {047c1d0ed29a28ddb22b3cbbcb787675 state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t31] handler.OpenedRegionHandler: Handling OPENED of 047c1d0ed29a28ddb22b3cbbcb787675 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t27] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=87df75ff7669a91c17cd903d5a9f3a3e, current_state={87df75ff7669a91c17cd903d5a9f3a3e state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 INFO [AM.ZK.Worker-pool2-t27] master.RegionStates: Transitioned {87df75ff7669a91c17cd903d5a9f3a3e state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492} to {87df75ff7669a91c17cd903d5a9f3a3e state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t27] handler.OpenedRegionHandler: Handling OPENED of 87df75ff7669a91c17cd903d5a9f3a3e from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,404 DEBUG [AM.ZK.Worker-pool2-t34] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=fc6e430b85da20d6ede2cd47a8288519, current_state={fc6e430b85da20d6ede2cd47a8288519 state=OPENING, ts=1464282657049, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,404 INFO [AM.ZK.Worker-pool2-t34] master.RegionStates: Transitioned {fc6e430b85da20d6ede2cd47a8288519 state=OPENING, ts=1464282657049, server=fsdata1c.corp.arc.com,60020,1464282355492} to {fc6e430b85da20d6ede2cd47a8288519 state=OPEN, ts=1464282657404, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,404 DEBUG [AM.ZK.Worker-pool2-t34] handler.OpenedRegionHandler: Handling OPENED of fc6e430b85da20d6ede2cd47a8288519 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,408 DEBUG [AM.ZK.Worker-pool2-t31] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 047c1d0ed29a28ddb22b3cbbcb787675 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,409 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Znode CUTOFF3,C31\x09166,1463559795389.047c1d0ed29a28ddb22b3cbbcb787675. deleted, state: {047c1d0ed29a28ddb22b3cbbcb787675 state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,410 DEBUG [AM.ZK.Worker-pool2-t34] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node fc6e430b85da20d6ede2cd47a8288519 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Onlined 047c1d0ed29a28ddb22b3cbbcb787675 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Offlined 047c1d0ed29a28ddb22b3cbbcb787675 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,409 DEBUG [AM.ZK.Worker-pool2-t27] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 87df75ff7669a91c17cd903d5a9f3a3e in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,411 DEBUG [AM.ZK.Worker-pool2-t27] master.AssignmentManager: Znode MON,O11\x09154548\x093\x09154524183,1456831930257.87df75ff7669a91c17cd903d5a9f3a3e. deleted, state: {87df75ff7669a91c17cd903d5a9f3a3e state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t27] master.RegionStates: Onlined 87df75ff7669a91c17cd903d5a9f3a3e on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t27] master.RegionStates: Offlined 87df75ff7669a91c17cd903d5a9f3a3e from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,411 DEBUG [AM.ZK.Worker-pool2-t34] master.AssignmentManager: Znode MON,O11\x09154769\x093\x09151995813,1443695471549.fc6e430b85da20d6ede2cd47a8288519. deleted, state: {fc6e430b85da20d6ede2cd47a8288519 state=OPEN, ts=1464282657404, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t34] master.RegionStates: Onlined fc6e430b85da20d6ede2cd47a8288519 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,411 INFO [AM.ZK.Worker-pool2-t34] master.RegionStates: Offlined fc6e430b85da20d6ede2cd47a8288519 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,415 DEBUG [AM.ZK.Worker-pool2-t23] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=ff0af29a7f8e111fc4d46c7a30a17459, current_state={ff0af29a7f8e111fc4d46c7a30a17459 state=PENDING_OPEN, ts=1464282656716, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,415 INFO [AM.ZK.Worker-pool2-t23] master.RegionStates: Transitioned {ff0af29a7f8e111fc4d46c7a30a17459 state=PENDING_OPEN, ts=1464282656716, server=fsdata1c.corp.arc.com,60020,1464282355492} to {ff0af29a7f8e111fc4d46c7a30a17459 state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,415 DEBUG [AM.ZK.Worker-pool2-t36] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=016ed857886cbcc5eb088b069484218e, current_state={016ed857886cbcc5eb088b069484218e state=PENDING_OPEN, ts=1464282656633, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,415 INFO [AM.ZK.Worker-pool2-t36] master.RegionStates: Transitioned {016ed857886cbcc5eb088b069484218e state=PENDING_OPEN, ts=1464282656633, server=fsdata1c.corp.arc.com,60020,1464282355492} to {016ed857886cbcc5eb088b069484218e state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,416 DEBUG [AM.ZK.Worker-pool2-t33] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=af3ed22bb54a2052eaca660332714462, current_state={af3ed22bb54a2052eaca660332714462 state=PENDING_OPEN, ts=1464282656664, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,416 INFO [AM.ZK.Worker-pool2-t33] master.RegionStates: Transitioned {af3ed22bb54a2052eaca660332714462 state=PENDING_OPEN, ts=1464282656664, server=fsdata1c.corp.arc.com,60020,1464282355492} to {af3ed22bb54a2052eaca660332714462 state=OPENING, ts=1464282657416, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,508 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=016ed857886cbcc5eb088b069484218e, current_state={016ed857886cbcc5eb088b069484218e state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,508 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Transitioned {016ed857886cbcc5eb088b069484218e state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492} to {016ed857886cbcc5eb088b069484218e state=OPEN, ts=1464282657508, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,508 DEBUG [AM.ZK.Worker-pool2-t21] handler.OpenedRegionHandler: Handling OPENED of 016ed857886cbcc5eb088b069484218e from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,512 DEBUG [AM.ZK.Worker-pool2-t21] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 016ed857886cbcc5eb088b069484218e in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,512 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Znode TSE,,1464258676776.016ed857886cbcc5eb088b069484218e. deleted, state: {016ed857886cbcc5eb088b069484218e state=OPEN, ts=1464282657508, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,512 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Onlined 016ed857886cbcc5eb088b069484218e on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,512 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Offlined 016ed857886cbcc5eb088b069484218e from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,518 DEBUG [AM.ZK.Worker-pool2-t37] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f5836191f2d1a9806269864db4287786, current_state={f5836191f2d1a9806269864db4287786 state=PENDING_OPEN, ts=1464282656656, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 INFO [AM.ZK.Worker-pool2-t37] master.RegionStates: Transitioned {f5836191f2d1a9806269864db4287786 state=PENDING_OPEN, ts=1464282656656, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f5836191f2d1a9806269864db4287786 state=OPENING, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 DEBUG [AM.ZK.Worker-pool2-t22] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=af3ed22bb54a2052eaca660332714462, current_state={af3ed22bb54a2052eaca660332714462 state=OPENING, ts=1464282657416, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 INFO [AM.ZK.Worker-pool2-t22] master.RegionStates: Transitioned {af3ed22bb54a2052eaca660332714462 state=OPENING, ts=1464282657416, server=fsdata1c.corp.arc.com,60020,1464282355492} to {af3ed22bb54a2052eaca660332714462 state=OPEN, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 DEBUG [AM.ZK.Worker-pool2-t22] handler.OpenedRegionHandler: Handling OPENED of af3ed22bb54a2052eaca660332714462 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,522 DEBUG [AM.ZK.Worker-pool2-t22] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node af3ed22bb54a2052eaca660332714462 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,522 DEBUG [AM.ZK.Worker-pool2-t22] master.AssignmentManager: Znode CUTOFF2,C31\x09164,1462813183940.af3ed22bb54a2052eaca660332714462. deleted, state: {af3ed22bb54a2052eaca660332714462 state=OPEN, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,522 INFO [AM.ZK.Worker-pool2-t22] master.RegionStates: Onlined af3ed22bb54a2052eaca660332714462 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,522 INFO [AM.ZK.Worker-pool2-t22] master.RegionStates: Offlined af3ed22bb54a2052eaca660332714462 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,526 DEBUG [AM.ZK.Worker-pool2-t29] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=a438c3fccb4dffce6c3f2fb2a217ff18, current_state={a438c3fccb4dffce6c3f2fb2a217ff18 state=PENDING_OPEN, ts=1464282656691, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,526 INFO [AM.ZK.Worker-pool2-t29] master.RegionStates: Transitioned {a438c3fccb4dffce6c3f2fb2a217ff18 state=PENDING_OPEN, ts=1464282656691, server=fsdata1c.corp.arc.com,60020,1464282355492} to {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPENING, ts=1464282657526, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,535 DEBUG [AM.ZK.Worker-pool2-t39] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=ff0af29a7f8e111fc4d46c7a30a17459, current_state={ff0af29a7f8e111fc4d46c7a30a17459 state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,535 INFO [AM.ZK.Worker-pool2-t39] master.RegionStates: Transitioned {ff0af29a7f8e111fc4d46c7a30a17459 state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492} to {ff0af29a7f8e111fc4d46c7a30a17459 state=OPEN, ts=1464282657535, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,536 DEBUG [AM.ZK.Worker-pool2-t39] handler.OpenedRegionHandler: Handling OPENED of ff0af29a7f8e111fc4d46c7a30a17459 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,539 DEBUG [AM.ZK.Worker-pool2-t39] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node ff0af29a7f8e111fc4d46c7a30a17459 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,540 DEBUG [AM.ZK.Worker-pool2-t39] master.AssignmentManager: Znode MONE,O31\x09156336\x093\x09152045625,1463618251297.ff0af29a7f8e111fc4d46c7a30a17459. deleted, state: {ff0af29a7f8e111fc4d46c7a30a17459 state=OPEN, ts=1464282657535, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,540 INFO [AM.ZK.Worker-pool2-t39] master.RegionStates: Onlined ff0af29a7f8e111fc4d46c7a30a17459 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,540 INFO [AM.ZK.Worker-pool2-t39] master.RegionStates: Offlined ff0af29a7f8e111fc4d46c7a30a17459 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,543 DEBUG [AM.ZK.Worker-pool2-t26] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=1e9ca9e463ebd7af1fbd910bd2d570a6, current_state={1e9ca9e463ebd7af1fbd910bd2d570a6 state=PENDING_OPEN, ts=1464282656630, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,543 INFO [AM.ZK.Worker-pool2-t26] master.RegionStates: Transitioned {1e9ca9e463ebd7af1fbd910bd2d570a6 state=PENDING_OPEN, ts=1464282656630, server=fsdata1c.corp.arc.com,60020,1464282355492} to {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPENING, ts=1464282657543, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,570 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=a438c3fccb4dffce6c3f2fb2a217ff18, current_state={a438c3fccb4dffce6c3f2fb2a217ff18 state=OPENING, ts=1464282657526, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,570 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Transitioned {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPENING, ts=1464282657526, server=fsdata1c.corp.arc.com,60020,1464282355492} to {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPEN, ts=1464282657570, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,571 DEBUG [AM.ZK.Worker-pool2-t31] handler.OpenedRegionHandler: Handling OPENED of a438c3fccb4dffce6c3f2fb2a217ff18 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,574 DEBUG [AM.ZK.Worker-pool2-t31] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node a438c3fccb4dffce6c3f2fb2a217ff18 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,574 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Znode CUTOFF1,,1463913041538.a438c3fccb4dffce6c3f2fb2a217ff18. deleted, state: {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPEN, ts=1464282657570, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,574 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Onlined a438c3fccb4dffce6c3f2fb2a217ff18 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,574 INFO [AM.ZK.Worker-pool2-t31] master.RegionStates: Offlined a438c3fccb4dffce6c3f2fb2a217ff18 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,576 DEBUG [AM.ZK.Worker-pool2-t34] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=bd20eeb125be62c29b0e19960472e76d, current_state={bd20eeb125be62c29b0e19960472e76d state=PENDING_OPEN, ts=1464282656702, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,576 INFO [AM.ZK.Worker-pool2-t34] master.RegionStates: Transitioned {bd20eeb125be62c29b0e19960472e76d state=PENDING_OPEN, ts=1464282656702, server=fsdata1c.corp.arc.com,60020,1464282355492} to {bd20eeb125be62c29b0e19960472e76d state=OPENING, ts=1464282657576, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,584 DEBUG [AM.ZK.Worker-pool2-t35] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f5836191f2d1a9806269864db4287786, current_state={f5836191f2d1a9806269864db4287786 state=OPENING, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,584 INFO [AM.ZK.Worker-pool2-t35] master.RegionStates: Transitioned {f5836191f2d1a9806269864db4287786 state=OPENING, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f5836191f2d1a9806269864db4287786 state=OPEN, ts=1464282657584, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,584 DEBUG [AM.ZK.Worker-pool2-t35] handler.OpenedRegionHandler: Handling OPENED of f5836191f2d1a9806269864db4287786 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,587 DEBUG [AM.ZK.Worker-pool2-t35] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node f5836191f2d1a9806269864db4287786 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,587 DEBUG [AM.ZK.Worker-pool2-t35] master.AssignmentManager: Znode MONE,O31\x09145411\x092\x091526,1452771105934.f5836191f2d1a9806269864db4287786. deleted, state: {f5836191f2d1a9806269864db4287786 state=OPEN, ts=1464282657584, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,587 INFO [AM.ZK.Worker-pool2-t35] master.RegionStates: Onlined f5836191f2d1a9806269864db4287786 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,587 INFO [AM.ZK.Worker-pool2-t35] master.RegionStates: Offlined f5836191f2d1a9806269864db4287786 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,594 DEBUG [AM.ZK.Worker-pool2-t33] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f07d37734f7d70dd47f1545345b772e9, current_state={f07d37734f7d70dd47f1545345b772e9 state=PENDING_OPEN, ts=1464282656745, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,594 INFO [AM.ZK.Worker-pool2-t33] master.RegionStates: Transitioned {f07d37734f7d70dd47f1545345b772e9 state=PENDING_OPEN, ts=1464282656745, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f07d37734f7d70dd47f1545345b772e9 state=OPENING, ts=1464282657594, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,625 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=1e9ca9e463ebd7af1fbd910bd2d570a6, current_state={1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPENING, ts=1464282657543, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,625 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Transitioned {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPENING, ts=1464282657543, server=fsdata1c.corp.arc.com,60020,1464282355492} to {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPEN, ts=1464282657625, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,625 DEBUG [AM.ZK.Worker-pool2-t21] handler.OpenedRegionHandler: Handling OPENED of 1e9ca9e463ebd7af1fbd910bd2d570a6 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,633 DEBUG [AM.ZK.Worker-pool2-t21] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 1e9ca9e463ebd7af1fbd910bd2d570a6 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,633 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Znode MONE,,1447155053918.1e9ca9e463ebd7af1fbd910bd2d570a6. deleted, state: {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPEN, ts=1464282657625, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,633 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Onlined 1e9ca9e463ebd7af1fbd910bd2d570a6 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,633 INFO [AM.ZK.Worker-pool2-t21] master.RegionStates: Offlined 1e9ca9e463ebd7af1fbd910bd2d570a6 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,645 DEBUG [AM.ZK.Worker-pool2-t37] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,646 INFO [AM.ZK.Worker-pool2-t37] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492} to {44e206d15b62ed4d452545242bd105cd state=OPENING, ts=1464282657646, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,657 DEBUG [AM.ZK.Worker-pool2-t28] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f07d37734f7d70dd47f1545345b772e9, current_state={f07d37734f7d70dd47f1545345b772e9 state=OPENING, ts=1464282657594, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,657 INFO [AM.ZK.Worker-pool2-t28] master.RegionStates: Transitioned {f07d37734f7d70dd47f1545345b772e9 state=OPENING, ts=1464282657594, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f07d37734f7d70dd47f1545345b772e9 state=OPEN, ts=1464282657657, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,657 DEBUG [AM.ZK.Worker-pool2-t28] handler.OpenedRegionHandler: Handling OPENED of f07d37734f7d70dd47f1545345b772e9 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,661 DEBUG [AM.ZK.Worker-pool2-t28] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node f07d37734f7d70dd47f1545345b772e9 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,661 DEBUG [AM.ZK.Worker-pool2-t28] master.AssignmentManager: Znode TSO,,1464172262155.f07d37734f7d70dd47f1545345b772e9. deleted, state: {f07d37734f7d70dd47f1545345b772e9 state=OPEN, ts=1464282657657, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,661 INFO [AM.ZK.Worker-pool2-t28] master.RegionStates: Onlined f07d37734f7d70dd47f1545345b772e9 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,661 INFO [AM.ZK.Worker-pool2-t28] master.RegionStates: Offlined f07d37734f7d70dd47f1545345b772e9 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,701 DEBUG [AM.ZK.Worker-pool2-t29] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=OPENING, ts=1464282657646, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,701 INFO [AM.ZK.Worker-pool2-t29] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=OPENING, ts=1464282657646, server=fsdata1c.corp.arc.com,60020,1464282355492} to {44e206d15b62ed4d452545242bd105cd state=OPEN, ts=1464282657701, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,701 DEBUG [AM.ZK.Worker-pool2-t29] handler.OpenedRegionHandler: Handling OPENED of 44e206d15b62ed4d452545242bd105cd from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,705 DEBUG [AM.ZK.Worker-pool2-t29] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 44e206d15b62ed4d452545242bd105cd in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,705 DEBUG [AM.ZK.Worker-pool2-t38] master.AssignmentManager: Znode CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd. deleted, state: {44e206d15b62ed4d452545242bd105cd state=OPEN, ts=1464282657701, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,705 INFO [AM.ZK.Worker-pool2-t38] master.RegionStates: Onlined 44e206d15b62ed4d452545242bd105cd on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,705 INFO [AM.ZK.Worker-pool2-t38] master.RegionStates: Offlined 44e206d15b62ed4d452545242bd105cd from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,711 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=bd20eeb125be62c29b0e19960472e76d, current_state={bd20eeb125be62c29b0e19960472e76d state=OPENING, ts=1464282657576, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,711 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {bd20eeb125be62c29b0e19960472e76d state=OPENING, ts=1464282657576, server=fsdata1c.corp.arc.com,60020,1464282355492} to {bd20eeb125be62c29b0e19960472e76d state=OPEN, ts=1464282657711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,711 DEBUG [AM.ZK.Worker-pool2-t30] handler.OpenedRegionHandler: Handling OPENED of bd20eeb125be62c29b0e19960472e76d from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,714 DEBUG [AM.ZK.Worker-pool2-t30] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node bd20eeb125be62c29b0e19960472e76d in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,714 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Znode MONO,,1450265445296.bd20eeb125be62c29b0e19960472e76d. deleted, state: {bd20eeb125be62c29b0e19960472e76d state=OPEN, ts=1464282657711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,714 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Onlined bd20eeb125be62c29b0e19960472e76d on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,714 INFO [AM.ZK.Worker-pool2-t30] master.RegionStates: Offlined bd20eeb125be62c29b0e19960472e76d from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:13:42,930 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata1c.corp.arc.com%2C60020%2C1464281939497.1464281944440
2016-05-26 17:13:42,934 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata2c.corp.arc.com%2C60020%2C1464281948047.1464281950096
2016-05-26 17:13:42,938 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata3c.corp.arc.com%2C60020%2C1464281938965.1464281944289
2016-05-26 17:13:42,941 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata3c.corp.arc.com%2C60020%2C1464281938965.1464281946729.meta
... View more
05-26-2016
06:16 PM
No, there is no error after 18.01.
... View more
05-26-2016
06:05 PM
Attached is the HBase RegionServer log: hbase-regionserver.txt.
... View more
05-26-2016
05:25 PM
Attached are the HBase Master error details: hbase-master.txt.
... View more
05-26-2016
05:14 PM
I increased the value from 10485760 B to 31457280 B. Now I am getting the following exception:

2016-05-26 15:39:53,949 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:39:55,266 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:39:56,773 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:39:58,785 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:40:01,809 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
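For reference, 10485760 is exactly the default of hbase.client.keyvalue.maxsize, so the property raised above was presumably that one. A minimal client-side sketch of the same change (the value mirrors the 31457280 B above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class KeyValueLimit {
    public static Configuration raisedLimit() {
        // Raise the per-cell size limit from the 10 MB default to 30 MB.
        // The same value belongs in hbase-site.xml so every client agrees.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.client.keyvalue.maxsize", "31457280");
        return conf;
    }
}

Raising the limit only moves the ceiling; cells this large still make for heavy RPCs, which may be related to the connections now closing mid-request.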
... View more
05-26-2016
04:47 PM
Getting the exception below:
2016-05-26 16:46:54,288 INFO [main] util.FileSyncLog: containerCell:10412488
2016-05-26 16:46:54,298 INFO [main] util.FileSyncLog: containerCellUpdated:10538784
java.lang.IllegalArgumentException: KeyValue size too large
at org.apache.hadoop.hbase.client.HTable.validatePut(HTable.java:1353)
at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:989)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:953)
at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.put(HTablePool.java:432)
at com.bizosys.hsearch.hbase.HTableWrapper.put(HTableWrapper.java:117)
at com.arc.hbase.jobs.CacheBuildJob.saveContainer(CacheBuildJob.java:410)
at com.arc.hbase.jobs.CacheBuildJob.save(CacheBuildJob.java:320)
at com.arc.hbase.jobs.CacheBuildJob.exec(CacheBuildJob.java:171)
at com.arc.hbase.jobs.CacheBuildJob.run(CacheBuildJob.java:75)
at com.arc.hbase.jobs.CacheBuildJob.main(CacheBuildJob.java:509)
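HTable.validatePut rejects the Put on the client, before any RPC is sent, because one cell exceeds hbase.client.keyvalue.maxsize. Besides raising the limit, a workaround is to keep every cell under it, for example by splitting an oversized value across numbered qualifiers. A minimal sketch, assuming an 8 MB chunk size and a reader that reassembles the parts (both are assumptions, not part of the original job):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class OversizedCellGuard {
    // Must stay below hbase.client.keyvalue.maxsize (10485760 by default).
    static final int MAX_CHUNK = 8 * 1024 * 1024;

    // Split one large value across qualifiers c0, c1, ... so that every
    // individual cell passes HTable.validatePut.
    static List<Put> chunked(byte[] row, byte[] family, byte[] value) {
        List<Put> puts = new ArrayList<Put>();
        for (int off = 0, part = 0; off < value.length; off += MAX_CHUNK, part++) {
            int end = Math.min(off + MAX_CHUNK, value.length);
            Put p = new Put(row);
            p.add(family, Bytes.toBytes("c" + part), Arrays.copyOfRange(value, off, end));
            puts.add(p);
        }
        return puts;
    }
}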
... View more
05-26-2016
04:42 PM
Below is the code:
package com.arc.hbase.jobs;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import com.arc.datasink.HBaseTables;
import com.arc.management.MonitorCacheBuildJob;
import com.arc.management.MonitorCollector;
import com.arc.management.MonitorMeasure;
import com.arc.rest.common.BytesMergerContainer;
import com.arc.rest.common.BytesMergerObject;
import com.arc.rest.common.ChangeDecorator;
import com.arc.rest.common.ChangeSetDecorator;
import com.arc.rest.common.ObjectKey;
import com.arc.rest.common.PostChanges_3_0.Changes;
import com.arc.util.ArcConfig;
import com.arc.util.FileSyncLog;
import com.arc.util.LineReaderUtil;
import com.bizosys.hsearch.hbase.HBaseFacade;
import com.bizosys.hsearch.hbase.HTableWrapper;
public class CacheBuildJob
{
private static final boolean INFO_ENABLED = FileSyncLog.l.isInfoEnabled();
private static final char SEPARATOR_OBJID = ',';
private static CacheBuildJob instance = null;
private static final boolean MONITOR_JOB = ArcConfig.MONITOR_CACHE_BUILD_JOB;
public static CacheBuildJob getInstance()
{
if ( null != instance) return instance;
synchronized (CacheBuildJob.class)
{
if ( null != instance ) return instance;
instance = new CacheBuildJob();
}
return instance;
}
long lastUpdatedTime= new Date().getTime() - ArcConfig.CACHE_BUILD_START_INTERVAL;
int lastUpdatedDt = new Date(lastUpdatedTime).getDate();
private CacheBuildJob()
{
}
boolean isRunning = false;
public void run()
{
if ( isRunning )
{
if ( INFO_ENABLED ) FileSyncLog.l.info(new Date().toLocaleString() + " Cache Build Job SKIPPED");
return;
}
isRunning = true;
long start = System.currentTimeMillis();
try
{
exec();
long end = System.currentTimeMillis();
if ( INFO_ENABLED ) FileSyncLog.l.info(new Date().toLocaleString() + " Cache Build Job SUCCESS, Run time in ms: " + (end - start));
}
catch (Exception ex)
{
long end = System.currentTimeMillis();
ex.printStackTrace();
FileSyncLog.l.fatal(new Date().toLocaleString() + " Cache Build Job FAILED, Run time in ms: " + (end - start));
}
finally
{
isRunning = false;
}
}
public void exec() throws Exception
{
long currentTime = System.currentTimeMillis();
// Shift the clock slightly into the past, by the configured run interval
currentTime = currentTime - ArcConfig.CACHEBUILD_JOB_RUN_INTERVAL;
if ( lastUpdatedTime >= currentTime) {
FileSyncLog.l.warn("CacheBuildJob : lastUpdatedTime >= currentTime : " + lastUpdatedTime + ">=" + currentTime);
return;
}
Date now = new Date(currentTime);
long startTime = currentTime;
int currentUpdatedDt = now.getDate();
Map<String, String> uniqueCotainerKeyWithObjectKeys = new HashMap<String, String>(1024);
List<ChangeDecorator.Deser> timeseriesChanges = new ArrayList<ChangeDecorator.Deser>(1024);
if ( INFO_ENABLED) FileSyncLog.l.info("AddToTimeseriesChanges: Start");
if ( MONITOR_JOB ) MonitorCacheBuildJob.getInstance().onEnter();
//Step1 - Get last left rows from the old table
if ( lastUpdatedDt != currentUpdatedDt)
{
// String tsTable = HBaseTables.getLastTimeSeriesTable(currentUpdatedDt);
String tsTable = HBaseTables.getLastTimeSeriesTable(currentTime);
addToTimeseriesChanges(tsTable, startTime, uniqueCotainerKeyWithObjectKeys, timeseriesChanges);
}
if ( INFO_ENABLED) FileSyncLog.l.info("AddToTimeseriesChanges, Changes: " + timeseriesChanges.size() + " Projects:" + uniqueCotainerKeyWithObjectKeys.size());
//Step2 - Get from current table
// String tsTable = HBaseTables.getTimeSeriesTable(currentUpdatedDt);
String tsTable = HBaseTables.getTimeSeriesTable(currentTime);
addToTimeseriesChanges(tsTable, startTime, uniqueCotainerKeyWithObjectKeys, timeseriesChanges);
if ( INFO_ENABLED)
FileSyncLog.l.info("AddToTimeseriesChanges, Changes: " + timeseriesChanges.size() + " Projects:" + uniqueCotainerKeyWithObjectKeys.size());
//Step3 -Merge with cutoff table.
String currentCutoffTableName = HBaseTables.getCutoffTable(currentUpdatedDt);
String lastCutoffTableName = HBaseTables.getLastCutoffTable(currentUpdatedDt);
HBaseFacade facade = null;
HTableWrapper currentCutoffTable = null;
HTableWrapper lastCutoffTable = null;
long cutoffTime = startTime - HBaseTables.CUTOFF_DURATION_SECS * 1000;
/**
* We have all the ChangeDecorators. Next, merge them into the
* current and last cutoff tables.
*/
try {
facade = HBaseFacade.getInstance();
if ( INFO_ENABLED) {
FileSyncLog.l.info("Current Cutoff Table: " + currentCutoffTableName +
" (Size)" + timeseriesChanges.size() + " (Cutoff Limit)" + new Date(cutoffTime));
}
currentCutoffTable = facade.getTable(currentCutoffTableName);
lastCutoffTable = facade.getTable(lastCutoffTableName);
// System.out.println("TimeSeriesTable - "+tsTable+"\tCurrent Cutoff Table - "+ currentCutoffTableName+"\tLast Cutoff Table - "+ lastCutoffTableName + " on " + now.toString() + " time in millis "+ now.getTime() + " currentTime " + currentTime);
int batchSize = ArcConfig.CACHE_BUILD_BATCH_SIZE;
Map<String, ChangeSetDecorator.Deser> objKeyWithChangeSets =
new HashMap<String, ChangeSetDecorator.Deser>(1024);
List<Put> putL = new ArrayList<Put>(batchSize);
for (ChangeDecorator.Deser deserCh : timeseriesChanges)
{
deserCh.touch(System.currentTimeMillis());
String objectKey = deserCh.objectKey;
/**
* Batch Flush on 4096 objects = 4MB
*/
// System.out.println("CacheBuild Time -: "+(System.currentTimeMillis()-deserCh.getTime()));
if ( objKeyWithChangeSets.size() >= batchSize )
{
if ( INFO_ENABLED) FileSyncLog.l.info("Saving: Enter");
save(currentCutoffTable, lastCutoffTable, objKeyWithChangeSets, uniqueCotainerKeyWithObjectKeys, cutoffTime, putL);
if ( INFO_ENABLED) FileSyncLog.l.info("Saving: Exit");
}
/**
* Step: 1 # Memory Table Lookup,
* If any object id changes are already there, means already loaded and read.
* Just merge to it.
*/
if (objKeyWithChangeSets.containsKey(objectKey)) {
if ( INFO_ENABLED) FileSyncLog.l.info("Memory Table Lookup: " + objectKey);
ChangeSetDecorator.Deser mergedVal =
createObjChangeSets(objKeyWithChangeSets.get(objectKey), deserCh, cutoffTime);
mergedVal.key = objectKey;
mergedVal.itemId = deserCh.getChanges().getItemId();
objKeyWithChangeSets.put(objectKey, mergedVal);
continue;
}
Get getter = new Get(objectKey.getBytes());
/**
* Step: 2 # Look in current cutoff Table,
*/
Result resultC = currentCutoffTable.get(getter);
{
if ( null != resultC) {
byte[] val = resultC.getValue(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
int valSize = ( null == val) ? 0 : val.length;
if ( valSize == 0 ) val = null;
if ( null != val ) {
if ( INFO_ENABLED) FileSyncLog.l.info("Current cutoff table Lookup: " + objectKey);
ChangeSetDecorator.Deser cs = new ChangeSetDecorator.Deser(val);
cs.key = objectKey;
cs.itemId = deserCh.getChanges().getItemId();
ChangeSetDecorator.Deser mergedVal = createObjChangeSets(cs, deserCh, cutoffTime);
objKeyWithChangeSets.put(objectKey, mergedVal);
continue;
}
}
}
/**
* Step: 3 # Fall back to last cutoff table as does not exist in
* current cut off table.
*/
Result resultO = lastCutoffTable.get(getter);
if ( null != resultO) {
byte[] val = resultO.getValue(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
int valSize = ( null == val) ? 0 : val.length;
if ( valSize == 0 ) val = null;
if ( null != val ) {
if ( INFO_ENABLED) FileSyncLog.l.info("Previous cutoff table Lookup: " + objectKey);
ChangeSetDecorator.Deser cs = new ChangeSetDecorator.Deser(val);
cs.key = objectKey;
cs.itemId = deserCh.getChanges().getItemId();
ChangeSetDecorator.Deser mergedVal = createObjChangeSets(cs, deserCh, cutoffTime);
objKeyWithChangeSets.put(objectKey, mergedVal);
continue;
}
}
/**
* We didn't find it in the current or last cutoff table.
* It is a fresh change; bootstrap the changes.
*/
if ( INFO_ENABLED) FileSyncLog.l.info("Bootstrapping: " + objectKey);
ChangeSetDecorator.Deser none = new ChangeSetDecorator.Deser(null);
none.key = objectKey;
none.itemId = deserCh.getChanges().getItemId();
ChangeSetDecorator.Deser mergedVal = createObjChangeSets(none, deserCh, -1);
objKeyWithChangeSets.put(objectKey, mergedVal);
}
if ( objKeyWithChangeSets.size() > 0 ) { // flush whatever remains of the final batch
save(currentCutoffTable, lastCutoffTable,
objKeyWithChangeSets, uniqueCotainerKeyWithObjectKeys, cutoffTime, putL);
}
/**
* Step: 4 # All success, move to the next timestamp
*/
lastUpdatedTime = startTime; //maximum timestamp value, exclusive
lastUpdatedDt = currentUpdatedDt;
} catch (Exception ex) {
throw ex;
} finally {
if ( null != facade && null != currentCutoffTable) facade.putTable(currentCutoffTable);
if ( null != facade && null != lastCutoffTable) facade.putTable(lastCutoffTable);
}
long endTime = System.currentTimeMillis();
if ( MONITOR_JOB )
{
long timeTaken = (endTime - startTime);
MonitorCollector collector = new MonitorCollector();
collector.add(new MonitorMeasure("CacheBuildJob", timeTaken));
MonitorCacheBuildJob.getInstance().onExit(collector);
}
}
/**
*
* @param currentCutoffTable
* @param lastCutoffTable
* @param objKeyWithChangeSets
* @param uniqueProjectIdWithObjIds
* @param cutoffTime
* @param putL
* @throws IOException
*/
private void save(HTableWrapper currentCutoffTable, HTableWrapper lastCutoffTable,
Map<String, ChangeSetDecorator.Deser> objKeyWithChangeSets,
Map<String, String> uniqueProjectIdWithObjIds,
long cutoffTime, List<Put> putL)
throws IOException {
putL.clear();
for (String key : objKeyWithChangeSets.keySet()) {
ChangeSetDecorator.Deser val = objKeyWithChangeSets.get(key);
Put update = new Put(key.getBytes());
update.add(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME, val.data);
update.setDurability(Durability.SYNC_WAL);
putL.add(update);
}
currentCutoffTable.put(putL);
if ( INFO_ENABLED) FileSyncLog.l.info("Cutoff Table Objects Added - " + putL.size());
putL.clear();
saveContainer(currentCutoffTable, lastCutoffTable,
objKeyWithChangeSets, uniqueProjectIdWithObjIds, cutoffTime);
currentCutoffTable.flushCommits();
objKeyWithChangeSets.clear();
}
/**
*
* @param currentCutoffTable
* @param lastCutoffTable
* @param objKeyWithChangeSets
* @param uniqueProjectIdWithObjIds
* @param cutoffTime
* @throws IOException
*/
private void saveContainer(HTableWrapper currentCutoffTable,
HTableWrapper lastCutoffTable,
Map<String, ChangeSetDecorator.Deser> objKeyWithChangeSets,
Map<String, String> uniqueProjectIdWithObjIds, long cutoffTime)
throws IOException {
/**
* mergeContainerChanges for the current projects
*/
List<String> objKeyL = new ArrayList<String>();
Set<ChangeSetDecorator.Deser> containerObjects = new HashSet<ChangeSetDecorator.Deser>();
for (String projectId : uniqueProjectIdWithObjIds.keySet()) {
objKeyL.clear();
containerObjects.clear();
/**
* Find out all object Ids belonging to this project and in current set
*/
String objectKeys = uniqueProjectIdWithObjIds.get(projectId);
LineReaderUtil.fastSplit(objKeyL, objectKeys, SEPARATOR_OBJID);
for (String objKey : objKeyL) {
ChangeSetDecorator.Deser val = objKeyWithChangeSets.get(objKey);
if ( null != val) containerObjects.add( val);
}
if ( INFO_ENABLED) FileSyncLog.l.info( "projectId:" + projectId + " ,Objects =" + containerObjects.size());
byte[] projectIdB = projectId.getBytes();
Get containerId = new Get(projectIdB);
/**
* Look the changes in current cutoff table.
*/
byte[] containerCell = null;
Result res = currentCutoffTable.get(containerId);
if ( null != res) {
containerCell = res.getValue(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME);
}
/**
* The project changes are not available in current cutoff table.
*/
int containerCellSize = ( null == containerCell) ? 0 : containerCell.length;
if ( containerCellSize == 0 ) {
res = lastCutoffTable.get(containerId);
if ( null != res) {
containerCell = res.getValue(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME);
}
}
containerCellSize = ( null == containerCell) ? 0 : containerCell.length;
if ( containerCellSize == 0 ) containerCell = null;
/**
* Merge the data
*/
if ( INFO_ENABLED ) FileSyncLog.l.info("containerCell:" +
( (null == containerCell) ? 0 : containerCell.length) ) ;
byte[] containerCellUpdated = BytesMergerContainer.mergeContainerChangesD(
containerCell, containerObjects, cutoffTime, -1L);
if ( INFO_ENABLED ) FileSyncLog.l.info("containerCellUpdated:" +
( (null == containerCellUpdated) ? 0 : containerCellUpdated.length) ) ;
/**
* Save to current cutoff table
*/
Put containerUpdate = new Put(projectIdB);
containerUpdate.add(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME, containerCellUpdated);
containerUpdate.setDurability(Durability.SYNC_WAL);
currentCutoffTable.put(containerUpdate);
}
if ( INFO_ENABLED) FileSyncLog.l.info("Cutoff Table Containers Added - " + uniqueProjectIdWithObjIds.size());
}
/**
*
* @param existingCutoffBytes
* @param currentChanges
* @param cutoffTime
* @return
* @throws IOException
*/
public final ChangeSetDecorator.Deser createObjChangeSets(final ChangeSetDecorator.Deser existingCutoffBytes,
final ChangeDecorator.Deser currentChanges, final long cutoffTime) throws IOException {
byte[] data = BytesMergerObject.mergeObjectChanges(
existingCutoffBytes.data, currentChanges, cutoffTime, -1);
existingCutoffBytes.data = data;
return existingCutoffBytes;
}
/**
*
* @param tsTableCurrent
* @param startTime
* @param uniqueProjectIds
* @param uniqueObjIds
*/
public void addToTimeseriesChanges(String tsTableCurrent, long startTime,
Map<String, String> uniqueContainerKeyWithObjectKeys, List<ChangeDecorator.Deser> timeseriesChanges) {
HBaseFacade facade = null;
HTableWrapper table = null;
ResultScanner scanner = null;
try {
facade = HBaseFacade.getInstance();
table = facade.getTable(tsTableCurrent);
Scan scan = new Scan();
scan.setCaching(1024);
scan.setMaxVersions(1);
scan.setTimeRange(lastUpdatedTime, startTime);
scan = scan.addColumn(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
scanner = table.getScanner(scan);
StringBuilder keyBuilder = new StringBuilder();
int counter = 0;
for (Result r: scanner) {
if ( null == r) continue;
if ( r.isEmpty()) continue;
counter++;
if ( counter % 1000 == 0 ) FileSyncLog.l.info(tsTableCurrent + " read : " + counter);
byte[] changeB = r.getValue(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
int changeBSize = ( null == changeB) ? 0 : changeB.length;
if ( changeBSize == 0 ) continue;
if ( INFO_ENABLED) FileSyncLog.l.info("Inside AddToTimeSeries: changeB length: " + changeB.length); // byte[].toString() would only print the array identity, not the contents
ChangeDecorator.Deser currentChangeDeser = new ChangeDecorator.Deser(changeB);
Changes currentChange = currentChangeDeser.getChanges();
//Add to Unique Projects
String containerKey = ObjectKey.getContainerKey(keyBuilder,currentChange);
String objectKey = ObjectKey.getObjectKey(keyBuilder,currentChange);
currentChangeDeser.objectKey = objectKey;
if (uniqueContainerKeyWithObjectKeys.containsKey(containerKey)) {
uniqueContainerKeyWithObjectKeys.put(containerKey,
uniqueContainerKeyWithObjectKeys.get(containerKey) + SEPARATOR_OBJID + objectKey);
} else {
uniqueContainerKeyWithObjectKeys.put(containerKey, objectKey);
}
//Merge Actions of a Object.
timeseriesChanges.add(currentChangeDeser);
}
} catch (Exception e) {
FileSyncLog.l.fatal("Unable to execute daily update job. " ,e);
} finally {
if (null != scanner) {
try {scanner.close();} catch (Exception ex) {ex.printStackTrace();}
}
if (null != table) {
try {facade.putTable(table);} catch (Exception ex) {ex.printStackTrace();}
}
}
}
public static void main(String[] args) throws Exception {
CacheBuildJob.getInstance().run();
}
}
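For context, run() already guards against overlapping executions via the isRunning flag, so the job can be fired from a single-threaded periodic scheduler. A minimal sketch of that wiring (the 60-second period is an assumption):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CacheBuildJobScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        // A single scheduler thread also sidesteps the non-volatile isRunning flag.
        ses.scheduleAtFixedRate(new Runnable() {
            public void run() {
                CacheBuildJob.getInstance().run();
            }
        }, 0, 60, TimeUnit.SECONDS);
    }
}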
... View more
05-26-2016
04:30 PM
Here is the RegionServer log on fsdata1c.corp.arc.com:
2016-05-26 13:57:47,158 WARN [RpcServer.handler=55,port=60020] ipc.RpcServer: RpcServer.respondercallId: 4324 service: ClientService methodName: Scan size: 30 connection: 10.1.1.243:52740: output error
2016-05-26 13:57:47,159 WARN [RpcServer.handler=55,port=60020] ipc.RpcServer: RpcServer.handler=55,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
2016-05-26 13:58:47,135 INFO [regionserver60020.leaseChecker] regionserver.HRegionServer: Scanner 3235538737043012213 lease expired on region CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd.
2016-05-26 13:58:52,422 INFO [RpcServer.reader=8,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2224)
at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1415)
at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:790)
at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:581)
at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:556)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
2016-05-26 13:58:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12076, accesses=3683357, hits=3525415, hitRatio=95.71%, , cachingAccesses=3547447, cachingHits=3448937, cachingHitsRatio=97.22%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 13:59:52,420 INFO [regionserver60020.leaseChecker] regionserver.HRegionServer: Scanner 594209239597513333 lease expired on region CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894.
2016-05-26 14:03:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3720308, hits=3562365, hitRatio=95.75%, , cachingAccesses=3584398, cachingHits=3485887, cachingHitsRatio=97.25%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:08:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3749373, hits=3591430, hitRatio=95.79%, , cachingAccesses=3613463, cachingHits=3514952, cachingHitsRatio=97.27%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:13:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3769032, hits=3611089, hitRatio=95.81%, , cachingAccesses=3633122, cachingHits=3534611, cachingHitsRatio=97.29%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:18:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3769844, hits=3611901, hitRatio=95.81%, , cachingAccesses=3633934, cachingHits=3535423, cachingHitsRatio=97.29%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:23:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3831804, hits=3673861, hitRatio=95.88%, , cachingAccesses=3695894, cachingHits=3597383, cachingHitsRatio=97.33%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:28:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3832074, hits=3674131, hitRatio=95.88%, , cachingAccesses=3696164, cachingHits=3597653, cachingHitsRatio=97.33%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:33:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3844554, hits=3686611, hitRatio=95.89%, , cachingAccesses=3708644, cachingHits=3610133, cachingHitsRatio=97.34%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:38:11,712 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894. after a delay of 15478
2016-05-26 14:38:21,712 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894. after a delay of 19133
2016-05-26 14:38:27,190 DEBUG [Thread-20] regionserver.HRegion: Started memstore flush for CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894., current region memstore size 93.2 M
2016-05-26 14:38:27,401 INFO [Thread-20] regionserver.DefaultStoreFlusher: Flushed, sequenceid=96047, memsize=31.7 M, hasBloomFilter=true, into tmp file hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/data/default/CUTOFF4/48ec64624ad37ae9272c5c28ec177894/.tmp/6d1e9fe6186a448f9a322c73ecf4ad0a
2016-05-26 14:38:27,411 DEBUG [Thread-20] regionserver.HRegionFileSystem: Committing store file hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/data/default/CUTOFF4/48ec64624ad37ae9272c5c28ec177894/.tmp/6d1e9fe6186a448f9a322c73ecf4ad0a as hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/data/default/CUTOFF4/48ec64624ad37ae9272c5c28ec177894/1/6d1e9fe6186a448f9a322c73ecf4ad0a
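The "Scanner ... lease expired" lines usually mean the client paused longer between scanner.next() calls than the server-side lease allows, after which the server drops the scanner and the client sees a closed channel. A minimal sketch of the two client-side knobs commonly tuned for this (the values are assumptions, and the timeout must also be raised on the region servers):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;

public class ScannerLeaseTuning {
    public static Scan slowClientScan(Configuration conf) {
        // Allow more time between successive next() calls (milliseconds).
        conf.setInt("hbase.client.scanner.timeout.period", 120000);
        Scan scan = new Scan();
        // Fetch fewer rows per RPC so each next() returns well within the lease.
        scan.setCaching(256);
        return scan;
    }
}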
... View more
05-26-2016
04:25 PM
Thanks. Restarting the App Timeline Server worked for me.
... View more
05-26-2016
04:14 PM
Unable to execute an HBase job due to the following exception (attachment: hbase-error.txt):
2016-05-26 15:49:38,270 WARN [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
Thu May 26 15:49:38 UTC 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@77e92d1b, java.io.IOException: Unexpected closed connection
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:811)
at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:394)
at com.bizosys.hsearch.hbase.HTableWrapper.get(HTableWrapper.java:100)
at com.arc.hbase.jobs.CacheBuildJob.saveContainer(CacheBuildJob.java:387)
at com.arc.hbase.jobs.CacheBuildJob.save(CacheBuildJob.java:323)
at com.arc.hbase.jobs.CacheBuildJob.exec(CacheBuildJob.java:172)
at com.arc.hbase.jobs.CacheBuildJob.run(CacheBuildJob.java:77)
at com.arc.hbase.jobs.CacheBuildJob.main(CacheBuildJob.java:513)
Attached is the full error log.
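When RpcRetryingCaller gives up like this, the usual levers are the client RPC timeout and retry count. A minimal sketch (the property names are standard HBase client settings; the values are assumptions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ClientRetryTuning {
    public static Configuration tunedConf() {
        Configuration conf = HBaseConfiguration.create();
        // How long a single RPC may take before the client abandons it (ms).
        conf.setInt("hbase.rpc.timeout", 120000);
        // How many times RpcRetryingCaller retries before surfacing the error.
        conf.setInt("hbase.client.retries.number", 10);
        // Base back-off between retries (ms).
        conf.setInt("hbase.client.pause", 200);
        return conf;
    }
}

None of this addresses the root cause if the region server is actually dropping connections under oversized cells; it only gives the client more headroom.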
... View more
Labels: Apache HBase
05-26-2016
12:53 PM
Hi @Pierre Villard, I restarted the App Timeline Server and ResourceManager, and that solved the problem. However, here is the YARN NodeManager log:
2016-05-26 05:47:55,939 INFO yarn.YarnShuffleService (YarnShuffleService.java:initializeContainer(183)) - Initializing container container_1464255636652_0012_01_000001
2016-05-26 05:47:55,943 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,943 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hbase-server-1.1.2.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,943 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/netty-all-4.0.23.Final.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/protobuf-java-2.5.0.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hadoop-common-2.7.1.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/htrace-core-3.1.0-incubating.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hbase-client-1.1.2.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/metrics-core-2.2.0.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/zookeeper-3.4.6.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hbase-protocol-1.1.2.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hbase-common-1.1.2.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hbase-hadoop-compat-1.1.2.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/guava-12.0.1.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0012/job.split transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0012/job.splitmetainfo transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0012/job.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0012/job.xml transitioned from INIT to DOWNLOADING
2016-05-26 05:47:55,944 INFO localizer.ResourceLocalizationService (ResourceLocalizationService.java:handle(711)) - Created localizer for container_1464255636652_0012_01_000001
2016-05-26 05:47:55,945 INFO localizer.ResourceLocalizationService (ResourceLocalizationService.java:writeCredentials(1191)) - Writing credentials to the nmPrivate file /hadoop/hdfs1/hadoop/yarn/local/nmPrivate/container_1464255636652_0012_01_000001.tokens. Credentials list:
2016-05-26 05:47:55,967 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:createUserCacheDirs(610)) - Initializing user hdfs
2016-05-26 05:47:55,968 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:startLocalizer(117)) - Copying from /hadoop/hdfs1/hadoop/yarn/local/nmPrivate/container_1464255636652_0012_01_000001.tokens to /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/container_1464255636652_0012_01_000001.tokens
2016-05-26 05:47:55,968 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:startLocalizer(124)) - Localizer CWD set to /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012 = file:/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012
2016-05-26 05:47:56,008 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4052/hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,033 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hbase-server-1.1.2.2.4.0.0-169.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4053/hbase-server-1.1.2.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,052 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/netty-all-4.0.23.Final.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4054/netty-all-4.0.23.Final.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,068 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/protobuf-java-2.5.0.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4055/protobuf-java-2.5.0.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,093 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hadoop-common-2.7.1.2.4.0.0-169.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4056/hadoop-common-2.7.1.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,111 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/htrace-core-3.1.0-incubating.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4057/htrace-core-3.1.0-incubating.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,135 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hbase-client-1.1.2.2.4.0.0-169.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4058/hbase-client-1.1.2.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,150 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/metrics-core-2.2.0.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4059/metrics-core-2.2.0.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,167 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/zookeeper-3.4.6.2.4.0.0-169.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4060/zookeeper-3.4.6.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,192 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hbase-protocol-1.1.2.2.4.0.0-169.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4061/hbase-protocol-1.1.2.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,208 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hbase-common-1.1.2.2.4.0.0-169.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4062/hbase-common-1.1.2.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,223 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/hbase-hadoop-compat-1.1.2.2.4.0.0-169.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4063/hbase-hadoop-compat-1.1.2.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,242 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/libjars/guava-12.0.1.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4064/guava-12.0.1.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,260 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0012/job.split(->/var/log/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/filecache/10/job.split) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,276 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0012/job.splitmetainfo(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/filecache/11/job.splitmetainfo) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,302 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0012/job.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/filecache/12/job.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,318 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0012/job.xml(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/filecache/13/job.xml) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:47:56,318 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0012_01_000001 transitioned from LOCALIZING to LOCALIZED
2016-05-26 05:47:56,336 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0012_01_000001 transitioned from LOCALIZED to RUNNING
2016-05-26 05:47:56,338 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:buildCommandExecutor(268)) - launchContainer: [bash, /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/container_1464255636652_0012_01_000001/default_container_executor.sh]
2016-05-26 05:47:57,647 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(375)) - Starting resource-monitoring for container_1464255636652_0012_01_000001
2016-05-26 05:47:57,657 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 25938 for container-id container_1464255636652_0012_01_000001: 142.0 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used
2016-05-26 05:48:00,686 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 25938 for container-id container_1464255636652_0012_01_000001: 341.4 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used
2016-05-26 05:48:03,324 INFO ipc.Server (Server.java:saslProcess(1386)) - Auth successful for appattempt_1464255636652_0012_000001 (auth:SIMPLE)
2016-05-26 05:48:03,330 INFO containermanager.ContainerManagerImpl (ContainerManagerImpl.java:startContainerInternal(816)) - Start request for container_1464255636652_0012_01_000002 by user hdfs
2016-05-26 05:48:03,331 INFO nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=hdfs IP=10.1.10.204 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1464255636652_0012 CONTAINERID=container_1464255636652_0012_01_000002
2016-05-26 05:48:03,331 INFO application.ApplicationImpl (ApplicationImpl.java:transition(304)) - Adding container_1464255636652_0012_01_000002 to application application_1464255636652_0012
2016-05-26 05:48:03,331 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0012_01_000002 transitioned from NEW to LOCALIZING
2016-05-26 05:48:03,332 INFO containermanager.AuxServices (AuxServices.java:handle(196)) - Got event CONTAINER_INIT for appId application_1464255636652_0012
2016-05-26 05:48:03,332 INFO yarn.YarnShuffleService (YarnShuffleService.java:initializeContainer(183)) - Initializing container container_1464255636652_0012_01_000002
2016-05-26 05:48:03,332 INFO containermanager.AuxServices (AuxServices.java:handle(196)) - Got event APPLICATION_INIT for appId application_1464255636652_0012
2016-05-26 05:48:03,332 INFO containermanager.AuxServices (AuxServices.java:handle(200)) - Got APPLICATION_INIT for service mapreduce_shuffle
2016-05-26 05:48:03,332 INFO mapred.ShuffleHandler (ShuffleHandler.java:addJobToken(671)) - Added token for job_1464255636652_0012
2016-05-26 05:48:03,332 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/job.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:03,332 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/job.xml transitioned from INIT to DOWNLOADING
2016-05-26 05:48:03,333 INFO localizer.ResourceLocalizationService (ResourceLocalizationService.java:handle(711)) - Created localizer for container_1464255636652_0012_01_000002
2016-05-26 05:48:03,334 INFO localizer.ResourceLocalizationService (ResourceLocalizationService.java:writeCredentials(1191)) - Writing credentials to the nmPrivate file /hadoop/hdfs1/hadoop/yarn/local/nmPrivate/container_1464255636652_0012_01_000002.tokens. Credentials list:
2016-05-26 05:48:03,380 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:createUserCacheDirs(610)) - Initializing user hdfs
2016-05-26 05:48:03,381 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:startLocalizer(117)) - Copying from /hadoop/hdfs1/hadoop/yarn/local/nmPrivate/container_1464255636652_0012_01_000002.tokens to /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/container_1464255636652_0012_01_000002.tokens
2016-05-26 05:48:03,381 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:startLocalizer(124)) - Localizer CWD set to /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012 = file:/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012
2016-05-26 05:48:03,435 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/job.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/filecache/14/job.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:03,456 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0012/job.xml(->/var/log/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/filecache/15/job.xml) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:03,456 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0012_01_000002 transitioned from LOCALIZING to LOCALIZED
2016-05-26 05:48:03,478 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0012_01_000002 transitioned from LOCALIZED to RUNNING
2016-05-26 05:48:03,481 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:buildCommandExecutor(268)) - launchContainer: [bash, /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/container_1464255636652_0012_01_000002/default_container_executor.sh]
2016-05-26 05:48:03,686 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(375)) - Starting resource-monitoring for container_1464255636652_0012_01_000002
2016-05-26 05:48:03,701 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 25938 for container-id container_1464255636652_0012_01_000001: 395.1 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used
2016-05-26 05:48:03,723 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 26129 for container-id container_1464255636652_0012_01_000002: 43.3 MB of 1 GB physical memory used; 2.5 GB of 2.1 GB virtual memory used
2016-05-26 05:48:06,474 INFO ipc.Server (Server.java:saslProcess(1386)) - Auth successful for appattempt_1464255636652_0012_000001 (auth:SIMPLE)
2016-05-26 05:48:06,478 INFO containermanager.ContainerManagerImpl (ContainerManagerImpl.java:stopContainerInternal(966)) - Stopping container with container Id: container_1464255636652_0012_01_000002
2016-05-26 05:48:06,478 INFO nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=hdfs IP=10.1.10.204 OPERATION=Stop Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1464255636652_0012 CONTAINERID=container_1464255636652_0012_01_000002
2016-05-26 05:48:06,478 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0012_01_000002 transitioned from RUNNING to KILLING
2016-05-26 05:48:06,478 INFO launcher.ContainerLaunch (ContainerLaunch.java:cleanupContainer(371)) - Cleaning up container container_1464255636652_0012_01_000002
2016-05-26 05:48:06,502 WARN nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(224)) - Exit code from container container_1464255636652_0012_01_000002 is : 143
2016-05-26 05:48:06,509 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0012_01_000002 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2016-05-26 05:48:06,509 INFO nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=hdfs OPERATION=Container Finished - Killed TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1464255636652_0012 CONTAINERID=container_1464255636652_0012_01_000002
2016-05-26 05:48:06,510 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(458)) - Deleting absolute path : /var/log/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/container_1464255636652_0012_01_000002
2016-05-26 05:48:06,510 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(458)) - Deleting absolute path : /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/container_1464255636652_0012_01_000002
2016-05-26 05:48:06,511 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0012_01_000002 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2016-05-26 05:48:06,511 INFO application.ApplicationImpl (ApplicationImpl.java:transition(347)) - Removing container_1464255636652_0012_01_000002 from application application_1464255636652_0012
2016-05-26 05:48:06,511 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:startContainerLogAggregation(547)) - Considering container container_1464255636652_0012_01_000002 for log-aggregation
2016-05-26 05:48:06,511 INFO containermanager.AuxServices (AuxServices.java:handle(196)) - Got event CONTAINER_STOP for appId application_1464255636652_0012
2016-05-26 05:48:06,511 INFO yarn.YarnShuffleService (YarnShuffleService.java:stopContainer(189)) - Stopping container container_1464255636652_0012_01_000002
2016-05-26 05:48:06,724 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(390)) - Stopping resource-monitoring for container_1464255636652_0012_01_000002
2016-05-26 05:48:06,733 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 25938 for container-id container_1464255636652_0012_01_000001: 411.9 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used
2016-05-26 05:48:09,746 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 25938 for container-id container_1464255636652_0012_01_000001: 412.8 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used
2016-05-26 05:48:12,755 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 25938 for container-id container_1464255636652_0012_01_000001: 412.8 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used
2016-05-26 05:48:12,912 INFO launcher.ContainerLaunch (ContainerLaunch.java:call(347)) - Container container_1464255636652_0012_01_000001 succeeded
2016-05-26 05:48:12,913 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0012_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2016-05-26 05:48:12,913 INFO launcher.ContainerLaunch (ContainerLaunch.java:cleanupContainer(371)) - Cleaning up container container_1464255636652_0012_01_000001
2016-05-26 05:48:12,935 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(458)) - Deleting absolute path : /var/log/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/container_1464255636652_0012_01_000001
2016-05-26 05:48:12,935 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(458)) - Deleting absolute path : /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012/container_1464255636652_0012_01_000001
2016-05-26 05:48:12,935 INFO nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=hdfs OPERATION=Container Finished - SucceededTARGET=ContainerImpl RESULT=SUCCESS APPID=application_1464255636652_0012 CONTAINERID=container_1464255636652_0012_01_000001
2016-05-26 05:48:12,936 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0012_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2016-05-26 05:48:12,936 INFO application.ApplicationImpl (ApplicationImpl.java:transition(347)) - Removing container_1464255636652_0012_01_000001 from application application_1464255636652_0012
2016-05-26 05:48:12,936 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:startContainerLogAggregation(547)) - Considering container container_1464255636652_0012_01_000001 for log-aggregation
2016-05-26 05:48:12,936 INFO containermanager.AuxServices (AuxServices.java:handle(196)) - Got event CONTAINER_STOP for appId application_1464255636652_0012
2016-05-26 05:48:12,936 INFO yarn.YarnShuffleService (YarnShuffleService.java:stopContainer(189)) - Stopping container container_1464255636652_0012_01_000001
2016-05-26 05:48:13,495 INFO ipc.Server (Server.java:saslProcess(1386)) - Auth successful for appattempt_1464255636652_0012_000001 (auth:SIMPLE)
2016-05-26 05:48:13,499 INFO containermanager.ContainerManagerImpl (ContainerManagerImpl.java:stopContainerInternal(966)) - Stopping container with container Id: container_1464255636652_0012_01_000001
2016-05-26 05:48:13,500 INFO nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=hdfs IP=10.1.10.20 OPERATION=Stop Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1464255636652_0012 CONTAINERID=container_1464255636652_0012_01_000001
2016-05-26 05:48:13,500 INFO nodemanager.NodeStatusUpdaterImpl (NodeStatusUpdaterImpl.java:removeOrTrackCompletedContainersFromContext(529)) - Removed completed containers from NM context: [container_1464255636652_0012_01_000001]
2016-05-26 05:48:13,501 INFO application.ApplicationImpl (ApplicationImpl.java:handle(464)) - Application application_1464255636652_0012 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2016-05-26 05:48:13,501 INFO containermanager.AuxServices (AuxServices.java:handle(196)) - Got event APPLICATION_STOP for appId application_1464255636652_0012
2016-05-26 05:48:13,501 INFO yarn.YarnShuffleService (YarnShuffleService.java:stopApplication(170)) - Stopping application application_1464255636652_0012
2016-05-26 05:48:13,501 INFO shuffle.ExternalShuffleBlockResolver (ExternalShuffleBlockResolver.java:applicationRemoved(206)) - Application application_1464255636652_0012 removed, cleanupLocalDirs = false
2016-05-26 05:48:13,501 INFO application.ApplicationImpl (ApplicationImpl.java:handle(464)) - Application application_1464255636652_0012 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2016-05-26 05:48:13,501 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:finishLogAggregation(555)) - Application just finished : application_1464255636652_0012
2016-05-26 05:48:13,502 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(458)) - Deleting absolute path : /var/log/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012
2016-05-26 05:48:13,503 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(458)) - Deleting absolute path : /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0012
2016-05-26 05:48:13,513 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:doContainerLogAggregation(602)) - Uploading logs for container container_1464255636652_0012_01_000001. Current good log dirs are /var/log/hadoop/yarn/log,/hadoop/hdfs1/hadoop/yarn/log
2016-05-26 05:48:13,514 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:doContainerLogAggregation(602)) - Uploading logs for container container_1464255636652_0012_01_000002. Current good log dirs are /var/log/hadoop/yarn/log,/hadoop/hdfs1/hadoop/yarn/log
2016-05-26 05:48:13,514 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /var/log/hadoop/yarn/log/application_1464255636652_0012/container_1464255636652_0012_01_000001/directory.info
2016-05-26 05:48:13,514 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /var/log/hadoop/yarn/log/application_1464255636652_0012/container_1464255636652_0012_01_000001/launch_container.sh
2016-05-26 05:48:13,514 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0012/container_1464255636652_0012_01_000001/stderr
2016-05-26 05:48:13,515 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0012/container_1464255636652_0012_01_000001/stdout
2016-05-26 05:48:13,515 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0012/container_1464255636652_0012_01_000001/syslog
2016-05-26 05:48:13,516 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0012/container_1464255636652_0012_01_000002/stderr
2016-05-26 05:48:13,516 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /var/log/hadoop/yarn/log/application_1464255636652_0012/container_1464255636652_0012_01_000002/directory.info
2016-05-26 05:48:13,516 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0012/container_1464255636652_0012_01_000002/syslog
2016-05-26 05:48:13,516 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /var/log/hadoop/yarn/log/application_1464255636652_0012/container_1464255636652_0012_01_000002/launch_container.sh
2016-05-26 05:48:13,516 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0012/container_1464255636652_0012_01_000002/stdout
2016-05-26 05:48:13,550 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /var/log/hadoop/yarn/log/application_1464255636652_0012
2016-05-26 05:48:13,551 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0012
2016-05-26 05:48:14,520 INFO ipc.Server (Server.java:saslProcess(1386)) - Auth successful for appattempt_1464255636652_0013_000001 (auth:SIMPLE)
2016-05-26 05:48:14,522 INFO containermanager.ContainerManagerImpl (ContainerManagerImpl.java:startContainerInternal(816)) - Start request for container_1464255636652_0013_01_000001 by user hdfs
2016-05-26 05:48:14,523 INFO containermanager.ContainerManagerImpl (ContainerManagerImpl.java:startContainerInternal(856)) - Creating a new application reference for app application_1464255636652_0013
2016-05-26 05:48:14,523 INFO application.ApplicationImpl (ApplicationImpl.java:handle(464)) - Application application_1464255636652_0013 transitioned from NEW to INITING
2016-05-26 05:48:14,523 INFO nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=hdfs IP=10.1.10.20 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1464255636652_0013 CONTAINERID=container_1464255636652_0013_01_000001
2016-05-26 05:48:14,524 WARN logaggregation.LogAggregationService (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple users.
2016-05-26 05:48:14,525 WARN logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:<init>(190)) - rollingMonitorInterval is set as -1. The log rolling mornitoring interval is disabled. The logs will be aggregated after this application is finished.
2016-05-26 05:48:14,543 INFO application.ApplicationImpl (ApplicationImpl.java:transition(304)) - Adding container_1464255636652_0013_01_000001 to application application_1464255636652_0013
2016-05-26 05:48:14,543 INFO application.ApplicationImpl (ApplicationImpl.java:handle(464)) - Application application_1464255636652_0013 transitioned from INITING to RUNNING
2016-05-26 05:48:14,543 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0013_01_000001 transitioned from NEW to LOCALIZING
2016-05-26 05:48:14,544 INFO containermanager.AuxServices (AuxServices.java:handle(196)) - Got event CONTAINER_INIT for appId application_1464255636652_0013
2016-05-26 05:48:14,544 INFO yarn.YarnShuffleService (YarnShuffleService.java:initializeContainer(183)) - Initializing container container_1464255636652_0013_01_000001
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hbase-server-1.1.2.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/netty-all-4.0.23.Final.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/protobuf-java-2.5.0.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hadoop-common-2.7.1.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/htrace-core-3.1.0-incubating.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hbase-client-1.1.2.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/metrics-core-2.2.0.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/zookeeper-3.4.6.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hbase-protocol-1.1.2.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hbase-common-1.1.2.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hbase-hadoop-compat-1.1.2.2.4.0.0-169.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/guava-12.0.1.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0013/job.split transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0013/job.splitmetainfo transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0013/job.jar transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0013/job.xml transitioned from INIT to DOWNLOADING
2016-05-26 05:48:14,544 INFO localizer.ResourceLocalizationService (ResourceLocalizationService.java:handle(711)) - Created localizer for container_1464255636652_0013_01_000001
2016-05-26 05:48:14,545 INFO localizer.ResourceLocalizationService (ResourceLocalizationService.java:writeCredentials(1191)) - Writing credentials to the nmPrivate file /hadoop/hdfs1/hadoop/yarn/local/nmPrivate/container_1464255636652_0013_01_000001.tokens. Credentials list:
2016-05-26 05:48:14,568 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:createUserCacheDirs(610)) - Initializing user hdfs
2016-05-26 05:48:14,569 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:startLocalizer(117)) - Copying from /hadoop/hdfs1/hadoop/yarn/local/nmPrivate/container_1464255636652_0013_01_000001.tokens to /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013/container_1464255636652_0013_01_000001.tokens
2016-05-26 05:48:14,569 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:startLocalizer(124)) - Localizer CWD set to /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013 = file:/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013
2016-05-26 05:48:14,613 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4065/hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,638 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hbase-server-1.1.2.2.4.0.0-169.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4066/hbase-server-1.1.2.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,657 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/netty-all-4.0.23.Final.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4067/netty-all-4.0.23.Final.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,675 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/protobuf-java-2.5.0.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4068/protobuf-java-2.5.0.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,698 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hadoop-common-2.7.1.2.4.0.0-169.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4069/hadoop-common-2.7.1.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,716 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/htrace-core-3.1.0-incubating.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4070/htrace-core-3.1.0-incubating.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,734 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hbase-client-1.1.2.2.4.0.0-169.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4071/hbase-client-1.1.2.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,748 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/metrics-core-2.2.0.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4072/metrics-core-2.2.0.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,765 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/zookeeper-3.4.6.2.4.0.0-169.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4073/zookeeper-3.4.6.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,790 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hbase-protocol-1.1.2.2.4.0.0-169.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4074/hbase-protocol-1.1.2.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,810 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hbase-common-1.1.2.2.4.0.0-169.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4075/hbase-common-1.1.2.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,828 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/hbase-hadoop-compat-1.1.2.2.4.0.0-169.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/filecache/4076/hbase-hadoop-compat-1.1.2.2.4.0.0-169.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,847 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster/user/hdfs/.staging/job_1464255636652_0013/libjars/guava-12.0.1.jar(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/filecache/4077/guava-12.0.1.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,864 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0013/job.split(->/var/log/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013/filecache/10/job.split) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,880 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0013/job.splitmetainfo(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013/filecache/11/job.splitmetainfo) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,912 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0013/job.jar(->/var/log/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013/filecache/12/job.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,930 INFO localizer.LocalizedResource (LocalizedResource.java:handle(203)) - Resource hdfs://aimprodcluster:8020/user/hdfs/.staging/job_1464255636652_0013/job.xml(->/hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013/filecache/13/job.xml) transitioned from DOWNLOADING to LOCALIZED
2016-05-26 05:48:14,930 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0013_01_000001 transitioned from LOCALIZING to LOCALIZED
2016-05-26 05:48:14,955 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0013_01_000001 transitioned from LOCALIZED to RUNNING
2016-05-26 05:48:14,959 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:buildCommandExecutor(268)) - launchContainer: [bash, /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013/container_1464255636652_0013_01_000001/default_container_executor.sh]
2016-05-26 05:48:15,757 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(375)) - Starting resource-monitoring for container_1464255636652_0013_01_000001
2016-05-26 05:48:15,757 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(390)) - Stopping resource-monitoring for container_1464255636652_0012_01_000001
2016-05-26 05:48:15,787 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 26444 for container-id container_1464255636652_0013_01_000001: 108.4 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used
2016-05-26 05:48:18,801 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 26444 for container-id container_1464255636652_0013_01_000001: 316.7 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used
2016-05-26 05:48:21,809 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 26444 for container-id container_1464255636652_0013_01_000001: 349.4 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used
2016-05-26 05:48:24,823 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 26444 for container-id container_1464255636652_0013_01_000001: 384.5 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used
2016-05-26 05:48:27,832 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 26444 for container-id container_1464255636652_0013_01_000001: 410.3 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used
2016-05-26 05:48:30,841 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 26444 for container-id container_1464255636652_0013_01_000001: 410.3 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used
2016-05-26 05:48:32,057 INFO launcher.ContainerLaunch (ContainerLaunch.java:call(347)) - Container container_1464255636652_0013_01_000001 succeeded
2016-05-26 05:48:32,057 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0013_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2016-05-26 05:48:32,057 INFO launcher.ContainerLaunch (ContainerLaunch.java:cleanupContainer(371)) - Cleaning up container container_1464255636652_0013_01_000001
2016-05-26 05:48:32,079 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(458)) - Deleting absolute path : /var/log/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013/container_1464255636652_0013_01_000001
2016-05-26 05:48:32,079 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(458)) - Deleting absolute path : /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013/container_1464255636652_0013_01_000001
2016-05-26 05:48:32,079 INFO nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=hdfs OPERATION=Container Finished - Succeeded TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1464255636652_0013 CONTAINERID=container_1464255636652_0013_01_000001
2016-05-26 05:48:32,080 INFO container.ContainerImpl (ContainerImpl.java:handle(1131)) - Container container_1464255636652_0013_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2016-05-26 05:48:32,080 INFO application.ApplicationImpl (ApplicationImpl.java:transition(347)) - Removing container_1464255636652_0013_01_000001 from application application_1464255636652_0013
2016-05-26 05:48:32,080 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:startContainerLogAggregation(547)) - Considering container container_1464255636652_0013_01_000001 for log-aggregation
2016-05-26 05:48:32,080 INFO containermanager.AuxServices (AuxServices.java:handle(196)) - Got event CONTAINER_STOP for appId application_1464255636652_0013
2016-05-26 05:48:32,080 INFO yarn.YarnShuffleService (YarnShuffleService.java:stopContainer(189)) - Stopping container container_1464255636652_0013_01_000001
2016-05-26 05:48:32,550 INFO ipc.Server (Server.java:saslProcess(1386)) - Auth successful for appattempt_1464255636652_0013_000001 (auth:SIMPLE)
2016-05-26 05:48:32,553 INFO containermanager.ContainerManagerImpl (ContainerManagerImpl.java:stopContainerInternal(966)) - Stopping container with container Id: container_1464255636652_0013_01_000001
2016-05-26 05:48:32,553 INFO nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=hdfs IP=10.1.10.20 OPERATION=Stop Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1464255636652_0013 CONTAINERID=container_1464255636652_0013_01_000001
2016-05-26 05:48:32,556 INFO nodemanager.NodeStatusUpdaterImpl (NodeStatusUpdaterImpl.java:removeOrTrackCompletedContainersFromContext(529)) - Removed completed containers from NM context: [container_1464255636652_0013_01_000001]
2016-05-26 05:48:32,556 INFO application.ApplicationImpl (ApplicationImpl.java:handle(464)) - Application application_1464255636652_0013 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2016-05-26 05:48:32,557 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(458)) - Deleting absolute path : /var/log/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013
2016-05-26 05:48:32,557 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(458)) - Deleting absolute path : /hadoop/hdfs1/hadoop/yarn/local/usercache/hdfs/appcache/application_1464255636652_0013
2016-05-26 05:48:32,557 INFO containermanager.AuxServices (AuxServices.java:handle(196)) - Got event APPLICATION_STOP for appId application_1464255636652_0013
2016-05-26 05:48:32,557 INFO yarn.YarnShuffleService (YarnShuffleService.java:stopApplication(170)) - Stopping application application_1464255636652_0013
2016-05-26 05:48:32,557 INFO shuffle.ExternalShuffleBlockResolver (ExternalShuffleBlockResolver.java:applicationRemoved(206)) - Application application_1464255636652_0013 removed, cleanupLocalDirs = false
2016-05-26 05:48:32,557 INFO application.ApplicationImpl (ApplicationImpl.java:handle(464)) - Application application_1464255636652_0013 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2016-05-26 05:48:32,557 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:finishLogAggregation(555)) - Application just finished : application_1464255636652_0013
2016-05-26 05:48:32,569 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:doContainerLogAggregation(602)) - Uploading logs for container container_1464255636652_0013_01_000001. Current good log dirs are /var/log/hadoop/yarn/log,/hadoop/hdfs1/hadoop/yarn/log
2016-05-26 05:48:32,569 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0013/container_1464255636652_0013_01_000001/stdout
2016-05-26 05:48:32,570 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /var/log/hadoop/yarn/log/application_1464255636652_0013/container_1464255636652_0013_01_000001/launch_container.sh
2016-05-26 05:48:32,570 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /var/log/hadoop/yarn/log/application_1464255636652_0013/container_1464255636652_0013_01_000001/directory.info
2016-05-26 05:48:32,570 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0013/container_1464255636652_0013_01_000001/syslog
2016-05-26 05:48:32,570 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0013/container_1464255636652_0013_01_000001/stderr
2016-05-26 05:48:32,606 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /var/log/hadoop/yarn/log/application_1464255636652_0013
2016-05-26 05:48:32,607 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(467)) - Deleting path : /hadoop/hdfs1/hadoop/yarn/log/application_1464255636652_0013
2016-05-26 05:48:33,841 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(390)) - Stopping resource-monitoring for container_1464255636652_0013_01_000001
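One thing that stands out earlier in this log is the warning that the remote root log dir /app-logs exists with permissions rwxrwxrwx instead of the expected rwxrwxrwt. YARN log aggregation wants the sticky bit set on that directory, so a likely fix (a sketch, assuming the default /app-logs location) is:
# as the hdfs superuser, restore the sticky bit on the remote log root
hdfs dfs -chmod 1777 /app-logs
# verify the mode now shows drwxrwxrwt
hdfs dfs -ls /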
Thanks, Raja Ray
05-26-2016
09:30 AM
I have a 5-node cluster: 2 masters and 3 datanodes. Each machine has 16 GB of memory, and almost 6 GB is free on each machine. I need to export an HBase table snapshot to HDFS, so I am running the following command.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot CUTOFF1-SNAPSHOT -copy-to hdfs://prodcluster/hbase-export
After issuing the above command I get the following message, printed to the console every second: [main] impl.YarnClientImpl: Application submission is not finished, submitted application application_1459775087681_0596 is still in NEW_SAVING. Please help!
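Since NEW_SAVING means the ResourceManager is still persisting the application into its state store, a submission stuck in that state usually points at the RM state store (often ZooKeeper) or an RM that is not fully up, rather than at the job itself. A minimal sketch for narrowing it down, assuming ResourceManager HA with the conventional rm1/rm2 service IDs:
# list applications stuck before ACCEPTED
yarn application -list -appStates NEW,NEW_SAVING,SUBMITTED
# check which ResourceManager is active (HA only; rm1/rm2 are assumed IDs)
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
If neither ResourceManager reports active, the RM log is the next place to look.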
Labels:
- Apache Hadoop
- Apache HBase
- Apache YARN
04-12-2016
06:14 AM
Josh, thanks a lot. I restarted the cluster and it worked. The issue was probably due to a slow start.
04-04-2016
03:04 PM
Hi Josh, below is the log from master3.corp.mirrorplus.com:
2016-04-04 05:47:12,159 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native
2016-04-04 05:47:12,159 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2016-04-04 05:47:12,159 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2016-04-04 05:47:12,159 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2016-04-04 05:47:12,159 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2016-04-04 05:47:12,159 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-327.3.1.el7.x86_64
2016-04-04 05:47:12,159 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=hbase
2016-04-04 05:47:12,159 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2016-04-04 05:47:12,159 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2016-04-04 05:47:12,160 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181 sessionTimeout=90000 watcher=master:160000x0, quorum=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181, baseZNode=/hbase-unsecure
2016-04-04 05:47:12,185 INFO [main-SendThread(master1.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server master1.corp.mirrorplus.com/10.1.1.94:2181. Will not attempt to authenticate using SASL (unknown error)
2016-04-04 05:47:12,192 INFO [main-SendThread(master1.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Socket connection established to master1.corp.mirrorplus.com/10.1.1.94:2181, initiating session
2016-04-04 05:47:12,201 INFO [main-SendThread(master1.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server master1.corp.mirrorplus.com/10.1.1.94:2181, sessionid = 0x153e0a3fe360007, negotiated timeout = 40000
2016-04-04 05:47:12,247 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2016-04-04 05:47:12,247 INFO [RpcServer.listener,port=16000] ipc.RpcServer: RpcServer.listener,port=16000: starting
2016-04-04 05:47:12,304 INFO [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-04-04 05:47:12,307 INFO [main] http.HttpRequestLog: Http request log for http.requests.master is not defined
2016-04-04 05:47:12,317 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2016-04-04 05:47:12,320 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2016-04-04 05:47:12,320 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-04-04 05:47:12,320 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-04-04 05:47:12,335 INFO [main] http.HttpServer: Jetty bound to port 16010
2016-04-04 05:47:12,335 INFO [main] mortbay.log: jetty-6.1.26.hwx
2016-04-04 05:47:12,693 INFO [main] mortbay.log: Started SelectChannelConnector@0.0.0.0:16010
2016-04-04 05:47:12,696 INFO [main] master.HMaster: hbase.rootdir=hdfs://aimprodcluster/apps/hbase/data1, hbase.cluster.distributed=true
2016-04-04 05:47:12,707 INFO [main] master.HMaster: Adding backup master ZNode /hbase-unsecure/backup-masters/master3.corp.mirrorplus.com,16000,1459763231138
2016-04-04 05:47:12,807 INFO [master3:16000.activeMasterManager] master.ActiveMasterManager: Another master is the active master, master2.corp.mirrorplus.com,16000,1459763227559; waiting to become the next active master
2016-04-04 05:47:12,827 INFO [master/master3.corp.mirrorplus.com/10.1.10.20:16000] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x79c1e5c2 connecting to ZooKeeper ensemble=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181
2016-04-04 05:47:12,827 INFO [master/master3.corp.mirrorplus.com/10.1.10.20:16000] zookeeper.ZooKeeper: Initiating client connection, connectString=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181 sessionTimeout=90000 watcher=hconnection-0x79c1e5c20x0, quorum=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181, baseZNode=/hbase-unsecure
2016-04-04 05:47:12,834 INFO [master/master3.corp.mirrorplus.com/10.1.10.20:16000-SendThread(master1.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server master1.corp.mirrorplus.com/10.1.1.94:2181. Will not attempt to authenticate using SASL (unknown error)
2016-04-04 05:47:12,835 INFO [master/master3.corp.mirrorplus.com/10.1.10.20:16000-SendThread(master1.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Socket connection established to master1.corp.mirrorplus.com/10.1.1.94:2181, initiating session
2016-04-04 05:47:12,839 INFO [master/master3.corp.mirrorplus.com/10.1.10.20:16000-SendThread(master1.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server master1.corp.mirrorplus.com/10.1.1.94:2181, sessionid = 0x153e0a3fe360008, negotiated timeout = 40000
2016-04-04 05:47:12,864 INFO [master/master3.corp.mirrorplus.com/10.1.10.20:16000] regionserver.HRegionServer: ClusterId : a330e3a3-4f5c-402c-b934-013315c0b547
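In case it helps to confirm which master currently holds the active lock, the master znode can be read directly with the ZooKeeper CLI bundled with HBase; a minimal sketch, with the /hbase-unsecure base znode taken from the log above:
hbase zkcli
# inside the zkcli prompt:
get /hbase-unsecure/master
ls /hbase-unsecure/backup-masters
The first command prints the serialized server name of the active master; the second lists the backup masters waiting to take over.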
04-04-2016
03:02 PM
Hi Josh, I swapped the active and standby masters; below is the log from master2.corp.mirrorplus.com:
2016-04-04 05:47:08,529 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native
2016-04-04 05:47:08,529 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2016-04-04 05:47:08,529 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2016-04-04 05:47:08,529 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2016-04-04 05:47:08,529 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2016-04-04 05:47:08,529 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-327.3.1.el7.x86_64
2016-04-04 05:47:08,529 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=hbase
2016-04-04 05:47:08,529 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2016-04-04 05:47:08,529 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2016-04-04 05:47:08,530 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181 sessionTimeout=90000 watcher=master:160000x0, quorum=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181, baseZNode=/hbase-unsecure
2016-04-04 05:47:08,575 INFO [main-SendThread(master1.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server master1.corp.mirrorplus.com/10.1.1.94:2181. Will not attempt to authenticate using SASL (unknown error)
2016-04-04 05:47:08,581 INFO [main-SendThread(master1.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Socket connection established to master1.corp.mirrorplus.com/10.1.1.94:2181, initiating session
2016-04-04 05:47:08,589 INFO [main-SendThread(master1.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server master1.corp.mirrorplus.com/10.1.1.94:2181, sessionid = 0x153e0a3fe360005, negotiated timeout = 40000
2016-04-04 05:47:08,648 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2016-04-04 05:47:08,648 INFO [RpcServer.listener,port=16000] ipc.RpcServer: RpcServer.listener,port=16000: starting
2016-04-04 05:47:08,710 INFO [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-04-04 05:47:08,716 INFO [main] http.HttpRequestLog: Http request log for http.requests.master is not defined
2016-04-04 05:47:08,730 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2016-04-04 05:47:08,733 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2016-04-04 05:47:08,733 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-04-04 05:47:08,733 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-04-04 05:47:08,753 INFO [main] http.HttpServer: Jetty bound to port 16010
2016-04-04 05:47:08,753 INFO [main] mortbay.log: jetty-6.1.26.hwx
2016-04-04 05:47:09,146 INFO [main] mortbay.log: Started SelectChannelConnector@0.0.0.0:16010
2016-04-04 05:47:09,151 INFO [main] master.HMaster: hbase.rootdir=hdfs://aimprodcluster/apps/hbase/data1, hbase.cluster.distributed=true
2016-04-04 05:47:09,167 INFO [main] master.HMaster: Adding backup master ZNode /hbase-unsecure/backup-masters/master2.corp.mirrorplus.com,16000,1459763227559
2016-04-04 05:47:09,270 INFO [master2:16000.activeMasterManager] master.ActiveMasterManager: Another master is the active master, master3.corp.mirrorplus.com,16000,1459762844022; waiting to become the next active master
2016-04-04 05:47:09,307 INFO [master/master2.corp.mirrorplus.com/10.1.1.95:16000] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7063e847 connecting to ZooKeeper ensemble=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181
2016-04-04 05:47:09,307 INFO [master/master2.corp.mirrorplus.com/10.1.1.95:16000] zookeeper.ZooKeeper: Initiating client connection, connectString=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181 sessionTimeout=90000 watcher=hconnection-0x7063e8470x0, quorum=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181, baseZNode=/hbase-unsecure
2016-04-04 05:47:09,308 INFO [master/master2.corp.mirrorplus.com/10.1.1.95:16000-SendThread(master2.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server master2.corp.mirrorplus.com/10.1.1.95:2181. Will not attempt to authenticate using SASL (unknown error)
2016-04-04 05:47:09,308 INFO [master/master2.corp.mirrorplus.com/10.1.1.95:16000-SendThread(master2.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Socket connection established to master2.corp.mirrorplus.com/10.1.1.95:2181, initiating session
2016-04-04 05:47:09,314 INFO [master/master2.corp.mirrorplus.com/10.1.1.95:16000-SendThread(master2.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server master2.corp.mirrorplus.com/10.1.1.95:2181, sessionid = 0x253e0a3fe3e0008, negotiated timeout = 40000
2016-04-04 05:47:09,338 INFO [master/master2.corp.mirrorplus.com/10.1.1.95:16000] regionserver.HRegionServer: ClusterId : 562355a3-3569-4d2e-bf16-efaa82632c96
2016-04-04 05:47:09,437 INFO [master2:16000.activeMasterManager] master.ActiveMasterManager: Deleting ZNode for /hbase-unsecure/backup-masters/master2.corp.mirrorplus.com,16000,1459763227559 from backup master directory
2016-04-04 05:47:09,444 INFO [master2:16000.activeMasterManager] master.ActiveMasterManager: Registered Active Master=master2.corp.mirrorplus.com,16000,1459763227559
2016-04-04 05:47:09,833 INFO [master2:16000.activeMasterManager] util.FSUtils: Created version file at hdfs://aimprodcluster/apps/hbase/data1 with version=8
2016-04-04 05:47:09,934 INFO [master2:16000.activeMasterManager] master.MasterFileSystem: BOOTSTRAP: creating hbase:meta region
2016-04-04 05:47:09,937 INFO [master2:16000.activeMasterManager] regionserver.HRegion: creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', BLOOMFILTER => 'NONE', VERSIONS => '10', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'false', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} RootDir = hdfs://aimprodcluster/apps/hbase/data1 Table name == hbase:meta
2016-04-04 05:47:10,010 INFO [master2:16000.activeMasterManager] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-04-04 05:47:10,035 INFO [master2:16000.activeMasterManager] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hregion-44984721.default, suffix=, logDir=hdfs://aimprodcluster/apps/hbase/data1/WALs/hregion-44984721, archiveDir=hdfs://aimprodcluster/apps/hbase/data1/oldWALs
2016-04-04 05:47:10,085 INFO [master2:16000.activeMasterManager] wal.FSHLog: Slow sync cost: 33 ms, current pipeline: []
2016-04-04 05:47:10,087 INFO [master2:16000.activeMasterManager] wal.FSHLog: New WAL /apps/hbase/data1/WALs/hregion-44984721/hregion-44984721.default.1459763230035
2016-04-04 05:47:10,182 INFO [StoreOpener-1588230740-1] hfile.CacheConfig: Allocating LruBlockCache size=5.99 GB, blockSize=64 KB
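Note the two lines near the end of this log: the master created a new version file at hdfs://aimprodcluster/apps/hbase/data1 and then bootstrapped hbase:meta. That only happens when hbase.rootdir points at an empty directory, so this start built a fresh root rather than reusing an existing one. A quick sanity check (path taken from the log above):
# a freshly bootstrapped root contains hbase.version, data/, WALs/, oldWALs/ ...
hdfs dfs -ls /apps/hbase/data1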
04-04-2016
11:41 AM
below is hbase master log- 2016-04-04 05:47:05,382 INFO [main] zookeeper.ZooKeeper: Client environment:java.class.path=/usr/hdp/current/hbase-master/conf:/usr/jdk64/jdk1.8.0_60/lib/tools.jar:/usr/hdp/current/hbase-master/bin/..:/usr/hdp/current/hbase-master/bin/../lib/activation-1.1.jar:/usr/hdp/current/hbase-master/bin/../lib/aopalliance-1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/current/hbase-master/bin/../lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/current/hbase-master/bin/../lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/current/hbase-master/bin/../lib/api-util-1.0.0-M20.jar:/usr/hdp/current/hbase-master/bin/../lib/asm-3.1.jar:/usr/hdp/current/hbase-master/bin/../lib/avro-1.7.4.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-beanutils-1.7.0.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-cli-1.2.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-codec-1.9.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-collections-3.2.2.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-compress-1.4.1.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-configuration-1.6.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-daemon-1.0.13.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-digester-1.8.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-el-1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-httpclient-3.1.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-io-2.4.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-lang-2.6.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-logging-1.2.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-math-2.2.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-math3-3.1.1.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-net-3.1.jar:/usr/hdp/current/hbase-master/bin/../lib/curator-client-2.7.1.jar:/usr/hdp/current/hbase-master/bin/../lib/curator-framework-2.7.1.jar:/usr/hdp/current/hbase-master/bin/../lib/curator-recipes-2.7.1.jar:/usr/hdp/current/hbase-master/bin/../lib/disruptor-3.3.0.jar:/usr/hdp/current/hbase-master/bin/../lib/findbugs-annotations-1.3.9-1.jar:/usr/hdp/current/hbase-master/bin/../lib/gson-2.2.4.jar:/usr/hdp/current/hbase-master/bin/../lib/guava-12.0.1.jar:/usr/hdp/current/hbase-master/bin/../lib/guice-3.0.jar:/usr/hdp/current/hbase-master/bin/../lib/guice-servlet-3.0.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-annotations-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-annotations-1.1.2.2.4.0.0-169-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-annotations.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-client-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-client.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-common-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-common-1.1.2.2.4.0.0-169-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-common.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-examples-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-examples.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop2-compat-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop2-compat.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop-compat-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop-compat.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-it-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase
-it-1.1.2.2.4.0.0-169-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-it.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-prefix-tree-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-prefix-tree.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-procedure-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-procedure.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-protocol-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-protocol.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-resource-bundle-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-resource-bundle.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rest-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rest.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-server-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-server-1.1.2.2.4.0.0-169-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-server.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shell-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shell.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-thrift-1.1.2.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-thrift.jar:/usr/hdp/current/hbase-master/bin/../lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/current/hbase-master/bin/../lib/httpclient-4.2.5.jar:/usr/hdp/current/hbase-master/bin/../lib/httpcore-4.2.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-core-2.2.3.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-xc-1.9.13.jar:/usr/hdp/current/hbase-master/bin/../lib/jamon-runtime-2.3.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jasper-compiler-5.5.23.jar:/usr/hdp/current/hbase-master/bin/../lib/jasper-runtime-5.5.23.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.inject-1.jar:/usr/hdp/current/hbase-master/bin/../lib/java-xmlbuilder-0.4.jar:/usr/hdp/current/hbase-master/bin/../lib/jaxb-api-2.2.2.jar:/usr/hdp/current/hbase-master/bin/../lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hbase-master/bin/../lib/jcodings-1.0.8.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-client-1.9.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-core-1.9.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-guice-1.9.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-json-1.9.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-server-1.9.jar:/usr/hdp/current/hbase-master/bin/../lib/jets3t-0.9.0.jar:/usr/hdp/current/hbase-master/bin/../lib/jettison-1.3.3.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/hbase-master/bin/../lib/joni-2.1.2.jar:/usr/hdp/current/hbase-master/bin/../lib/jruby-complete-1.6.8.jar:/usr/hdp/current/hbase-master/bin/../lib/jsch-0.1.42.jar:/usr/hdp/current/hbase-master/bin/../lib/jsp-2.1-6.1.14.jar:/usr/hdp/current/hbase-master/bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/hdp/current/hbase-master/bin/../lib/jsr305-1.3.9.jar:/usr/hdp/current/hbase-master/bin/../lib/junit-4.11.jar:/usr/hdp/current/hbase-master/bin/../lib/leveldbjni-all-1.8.jar:/usr/hdp/current/hbase-master/bin/../lib/libthrift-0.9.0.jar:/usr/hdp/current/hbase-master/bin/../lib/log4j-1.2.17.jar:/usr/hdp/current/
hbase-master/bin/../lib/metrics-core-2.2.0.jar:/usr/hdp/current/hbase-master/bin/../lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-3.2.4.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-all-4.0.23.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/ojdbc6.jar:/usr/hdp/current/hbase-master/bin/../lib/okhttp-2.4.0.jar:/usr/hdp/current/hbase-master/bin/../lib/okio-1.4.0.jar:/usr/hdp/current/hbase-master/bin/../lib/paranamer-2.3.jar:/usr/hdp/current/hbase-master/bin/../lib/protobuf-java-2.5.0.jar:/usr/hdp/current/hbase-master/bin/../lib/ranger-hbase-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/ranger-plugin-classloader-0.5.0.2.4.0.0-169.jar:/usr/hdp/current/hbase-master/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/hdp/current/hbase-master/bin/../lib/servlet-api-2.5.jar:/usr/hdp/current/hbase-master/bin/../lib/slf4j-api-1.7.7.jar:/usr/hdp/current/hbase-master/bin/../lib/snappy-java-1.0.4.1.jar:/usr/hdp/current/hbase-master/bin/../lib/spymemcached-2.11.6.jar:/usr/hdp/current/hbase-master/bin/../lib/xercesImpl-2.9.1.jar:/usr/hdp/current/hbase-master/bin/../lib/xml-apis-1.3.04.jar:/usr/hdp/current/hbase-master/bin/../lib/xmlenc-0.52.jar:/usr/hdp/current/hbase-master/bin/../lib/xz-1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/zookeeper.jar:/usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/spark-yarn-shuffle.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ojdbc6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-hdfs-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-plugin-classloader-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-yarn-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/aws-java-sdk-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-16
9/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-api-1.7.10.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-annotations-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-annotations.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-auth-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-auth.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-aws-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-aws.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-azure-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-azure.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-tests.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-nfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-nfs.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/./:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/l
ib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/okhttp-2.4.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/okio-1.4.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-nfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jets3t-0.9.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/objenesis-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.
0-169/hadoop-yarn/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/fst-2.24.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guice-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/zookeeper-3.4.6.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/javassist-3.18.1-GA.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-api-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-client-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-registry-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hd
p/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-tests-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-timeline-plugins-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-timeline-plugins.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/guice-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-sls-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-
compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-lang3-3.3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-ant-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-ant.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-archives-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//joda-time-2.9.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-auth-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-auth.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-datajoin-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-distcp-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-openstack-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-extras-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//xz-1.
0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-gridmix-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-rumen-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-streaming-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//zookeeper-3.4.6.2.4.0.0-169.jar::mysql-connector-java.jar:/usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/hadoop-annotations-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-annotations.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-auth-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-auth.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-aws-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-aws.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-azure-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-azure.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-common-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-common-tests.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-common.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-nfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/hadoop-nfs.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/spark-yarn-shuffle.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ojdbc6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-hdfs-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-plugin-classloader-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-yarn-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoo
p/lib/jets3t-0.9.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/aws-java-sdk-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-api-1.7.10.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/zookeeper/zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/zookeeper/zookeeper.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/ant-1.8.0.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/ant-launcher-1.8.0.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/backport-util-concurrent-3.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/classworlds-1.1-alpha-2.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/commons-codec-1.6.jar:/usr/hdp/2.4.
0.0-169/zookeeper/lib/commons-io-2.2.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/commons-logging-1.1.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/httpclient-4.2.3.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/httpcore-4.2.3.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/jline-0.9.94.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/jsoup-1.7.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/log4j-1.2.16.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/maven-artifact-2.2.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/maven-model-2.2.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/maven-profile-2.2.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/maven-project-2.2.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/maven-settings-2.2.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/nekohtml-1.9.6.2.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/netty-3.7.0.Final.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/plexus-interpolation-1.11.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/plexus-utils-3.0.8.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/slf4j-api-1.6.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/wagon-file-1.0-beta-6.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/wagon-http-2.4.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/wagon-http-shared4-2.4.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/wagon-provider-api-2.4.jar:/usr/hdp/2.4.0.0-169/zookeeper/lib/xercesMinimal-1.9.6.2.jar:
2016-04-04 05:47:05,384 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native
2016-04-04 05:47:05,384 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2016-04-04 05:47:05,384 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2016-04-04 05:47:05,384 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2016-04-04 05:47:05,384 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2016-04-04 05:47:05,384 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-327.3.1.el7.x86_64
2016-04-04 05:47:05,384 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=hbase
2016-04-04 05:47:05,384 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2016-04-04 05:47:05,384 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2016-04-04 05:47:05,385 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181 sessionTimeout=90000 watcher=master:160000x0, quorum=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181, baseZNode=/hbase-unsecure
2016-04-04 05:47:05,403 INFO [main-SendThread(master2.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server master2.corp.mirrorplus.com/10.1.1.95:2181. Will not attempt to authenticate using SASL (unknown error)
2016-04-04 05:47:05,408 INFO [main-SendThread(master2.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Socket connection established to master2.corp.mirrorplus.com/10.1.1.95:2181, initiating session
2016-04-04 05:47:05,466 INFO [main-SendThread(master2.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server master2.corp.mirrorplus.com/10.1.1.95:2181, sessionid = 0x253e0a3fe3e0007, negotiated timeout = 40000
2016-04-04 05:47:05,551 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2016-04-04 05:47:05,551 INFO [RpcServer.listener,port=16000] ipc.RpcServer: RpcServer.listener,port=16000: starting
2016-04-04 05:47:05,612 INFO [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-04-04 05:47:05,617 INFO [main] http.HttpRequestLog: Http request log for http.requests.master is not defined
2016-04-04 05:47:05,631 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2016-04-04 05:47:05,633 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2016-04-04 05:47:05,633 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-04-04 05:47:05,633 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-04-04 05:47:05,654 INFO [main] http.HttpServer: Jetty bound to port 16010
2016-04-04 05:47:05,654 INFO [main] mortbay.log: jetty-6.1.26.hwx
2016-04-04 05:47:06,186 INFO [main] mortbay.log: Started SelectChannelConnector@0.0.0.0:16010
2016-04-04 05:47:06,191 INFO [main] master.HMaster: hbase.rootdir=hdfs://aimprodcluster/apps/hbase/data1, hbase.cluster.distributed=true
2016-04-04 05:47:06,204 INFO [main] master.HMaster: Adding backup master ZNode /hbase-unsecure/backup-masters/master1.corp.mirrorplus.com,16000,1459763224469
2016-04-04 05:47:06,289 INFO [master1:16000.activeMasterManager] master.ActiveMasterManager: Another master is the active master, master3.corp.mirrorplus.com,16000,1459762844022; waiting to become the next active master
2016-04-04 05:47:06,336 INFO [master/master1.corp.mirrorplus.com/10.1.1.94:16000] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xd8b2c68 connecting to ZooKeeper ensemble=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181
2016-04-04 05:47:06,336 INFO [master/master1.corp.mirrorplus.com/10.1.1.94:16000] zookeeper.ZooKeeper: Initiating client connection, connectString=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181 sessionTimeout=90000 watcher=hconnection-0xd8b2c680x0, quorum=master3.corp.mirrorplus.com:2181,master2.corp.mirrorplus.com:2181,master1.corp.mirrorplus.com:2181, baseZNode=/hbase-unsecure
2016-04-04 05:47:06,337 INFO [master/master1.corp.mirrorplus.com/10.1.1.94:16000-SendThread(master3.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server master3.corp.mirrorplus.com/10.1.10.20:2181. Will not attempt to authenticate using SASL (unknown error)
2016-04-04 05:47:06,338 INFO [master/master1.corp.mirrorplus.com/10.1.1.94:16000-SendThread(master3.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Socket connection established to master3.corp.mirrorplus.com/10.1.10.20:2181, initiating session
2016-04-04 05:47:06,346 INFO [master/master1.corp.mirrorplus.com/10.1.1.94:16000-SendThread(master3.corp.mirrorplus.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server master3.corp.mirrorplus.com/10.1.10.20:2181, sessionid = 0x353e0a406670007, negotiated timeout = 40000
2016-04-04 05:47:06,373 INFO [master/master1.corp.mirrorplus.com/10.1.1.94:16000] regionserver.HRegionServer: ClusterId : 562355a3-3569-4d2e-bf16-efaa82632c96
2016-04-04 05:47:09,438 INFO [master1:16000.activeMasterManager] master.ActiveMasterManager: Another master is the active master, master2.corp.mirrorplus.com,16000,1459763227559; waiting to become the next active master
04-04-2016
09:20 AM
2 Kudos
I am getting the following error while listing tables from the HBase shell. The error output is below:
[root@data1 hbase]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.0.0-169/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2.2.4.0.0-169, r61dfb2b344f424a11f93b3f086eab815c1eb0b6a, Wed Feb 10 07:08:51 UTC 2016
hbase(main):001:0> list
TABLE
ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2314)
at org.apache.hadoop.hbase.master.MasterRpcServices.getTableDescriptors(MasterRpcServices.java:853)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:53136)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
The HDFS directory structure:
[root@data1 hbase]# hdfs dfs -ls /apps/hbase
Found 2 items
drwxr-xr-x - hbase hdfs 0 2016-04-04 05:07 /apps/hbase/data
drwx--x--x - hbase hdfs 0 2016-03-31 08:12 /apps/hbase/staging
Please let me know how to fix this.
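In case it helps with diagnosis, here is a rough sketch of the checks that can narrow this down (the master log path is an assumption based on a default HDP layout):
# Check whether HDFS is stuck in safe mode - a common reason the master never finishes initializing
hdfs dfsadmin -safemode get
# Look at the active master's log to see which initialization step it is blocked on (path assumed)
tail -n 100 /var/log/hbase/hbase-hbase-master-*.log
# Once the master reports as initialized, check table and region consistency
hbase hbck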
Labels: Apache HBase
03-24-2016
10:35 AM
Hi @Chris Nauroth, thanks for the solution. I increased the disk space, turned off HDFS safe mode, and started the region server, and everything is working now. Thanks, Raja Ray
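For anyone hitting the same problem, this is roughly the sequence I followed (a sketch assuming a standard HDP install; the hbase-daemon.sh path may differ on other layouts):
# Confirm the NameNode is in safe mode because resources are low
hdfs dfsadmin -safemode get
# After adding or freeing disk space on the NameNode host, leave safe mode manually
hdfs dfsadmin -safemode leave
# Restart the region server on the affected node, as the hbase user (path assumed for HDP)
su - hbase
/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh start regionserver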
03-24-2016
05:47 AM
1 Kudo
My HBase region server keeps going down, and I need help. The error log is below (a short diagnostic sketch follows it):
2016-03-23 23:14:32,250 ERROR [RS_CLOSE_REGION-fsdata1c:60020-1] regionserver.HRegion: Memstore size is 1286992
2016-03-23 23:14:32,250 INFO [RS_CLOSE_REGION-fsdata1c:60020-1] regionserver.HRegion: Closed CUTOFF4,C31\x0916,1458649721550.fab6ecb6588e89c84cff626593274c25.
2016-03-23 23:14:32,250 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-1] handler.CloseRegionHandler: Closed CUTOFF4,C31\x0916,1458649721550.fab6ecb6588e89c84cff626593274c25.
2016-03-23 23:14:32,250 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-1] handler.CloseRegionHandler: Processing close of MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.
2016-03-23 23:14:32,250 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-1] regionserver.HRegion: Closing MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.: disabling compactions & flushes
2016-03-23 23:14:32,250 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-1] regionserver.HRegion: Updates disabled for region MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.
2016-03-23 23:14:32,252 INFO [StoreCloserThread-MONO,O11\x09156779\x093\x09152446845,1449055882623.b51afac320641a8fde6a8f545d70e084.-1] regionserver.HStore: Closed 1
2016-03-23 23:14:32,252 INFO [StoreCloserThread-MONE,O31\x09145411\x092\x091526,1452771105934.f5836191f2d1a9806269864db4287786.-1] regionserver.HStore: Closed 1
2016-03-23 23:14:32,252 INFO [RS_CLOSE_REGION-fsdata1c:60020-0] regionserver.HRegion: Closed MONO,O11\x09156779\x093\x09152446845,1449055882623.b51afac320641a8fde6a8f545d70e084.
2016-03-23 23:14:32,252 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-0] handler.CloseRegionHandler: Closed MONO,O11\x09156779\x093\x09152446845,1449055882623.b51afac320641a8fde6a8f545d70e084.
2016-03-23 23:14:32,252 INFO [RS_CLOSE_REGION-fsdata1c:60020-2] regionserver.HRegion: Closed MONE,O31\x09145411\x092\x091526,1452771105934.f5836191f2d1a9806269864db4287786.
2016-03-23 23:14:32,253 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-2] handler.CloseRegionHandler: Closed MONE,O31\x09145411\x092\x091526,1452771105934.f5836191f2d1a9806269864db4287786.
2016-03-23 23:14:32,254 INFO [StoreCloserThread-MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.-1] regionserver.HStore: Closed 1
2016-03-23 23:14:32,255 INFO [RS_CLOSE_REGION-fsdata1c:60020-1] regionserver.HRegion: Closed MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.
2016-03-23 23:14:32,255 DEBUG [RS_CLOSE_REGION-fsdata1c:60020-1] handler.CloseRegionHandler: Closed MONO,O31\x09155971\x093\x0915226833,1449055882623.5665954c6641a2e2f930a4d9cd6003fc.
2016-03-23 23:14:32,444 INFO [regionserver60020] regionserver.HRegionServer: stopping server fsdata1c.corp.arc.com,60020,1452067957740; all regions closed.
2016-03-23 23:14:32,444 DEBUG [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier interrupted while waiting for notification from AsyncSyncer thread
2016-03-23 23:14:32,444 INFO [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier exiting
2016-03-23 23:14:32,444 DEBUG [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
2016-03-23 23:14:32,444 INFO [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 exiting
2016-03-23 23:14:32,444 DEBUG [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread
2016-03-23 23:14:32,444 INFO [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 exiting
2016-03-23 23:14:32,444 DEBUG [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread
2016-03-23 23:14:32,445 INFO [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 exiting
2016-03-23 23:14:32,445 DEBUG [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread
2016-03-23 23:14:32,445 INFO [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 exiting
2016-03-23 23:14:32,445 DEBUG [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread
2016-03-23 23:14:32,445 INFO [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 exiting
2016-03-23 23:14:32,445 DEBUG [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer
2016-03-23 23:14:32,445 INFO [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter exiting
2016-03-23 23:14:32,445 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/WALs/fsdata1c.corp.arc.com,60020,1452067957740
2016-03-23 23:14:32,454 ERROR [regionserver60020] regionserver.HRegionServer: Close and delete failed
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot complete file /apps/hbase/data/WALs/fsdata1c.corp.arc.com,60020,1452067957740/fsdata1c.corp.arc.com%2C60020%2C1452067957740.1458771271979. Name node is in safe mode.
Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1201)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2994)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:647)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:484)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
at org.apache.hadoop.ipc.Client.call(Client.java:1410)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy18.complete(Unknown Source)
at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy18.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:404)
at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:272)
at com.sun.proxy.$Proxy19.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2116)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2100)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:103)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:119)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.close(FSHLog.java:941)
at org.apache.hadoop.hbase.regionserver.HRegionServer.closeWAL(HRegionServer.java:1185)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:998)
at java.lang.Thread.run(Thread.java:744)
2016-03-23 23:14:32,555 INFO [regionserver60020] regionserver.Leases: regionserver60020 closing leases
2016-03-23 23:14:32,555 INFO [regionserver60020] regionserver.Leases: regionserver60020 closed leases
2016-03-23 23:14:32,863 WARN [LeaseRenewer:hbase@fsmaster1c.corp.arc.com:8020] hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_hb_rs_fsdata1c.corp.arc.com,60020,1452067957740_752162766_33] for 1018 seconds. Will retry shortly ...
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot renew lease for DFSClient_hb_rs_fsdata1c.corp.arc.com,60020,1452067957740_752162766_33. Name node is in safe mode.
Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1201)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4132)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:767)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:588)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
at org.apache.hadoop.ipc.Client.call(Client.java:1410)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy18.renewLease(Unknown Source)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy18.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:532)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:272)
at com.sun.proxy.$Proxy19.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:791)
at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
at java.lang.Thread.run(Thread.java:744)
2016-03-23 23:14:33,865 WARN [LeaseRenewer:hbase@fsmaster1c.corp.arc.com:8020] hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_hb_rs_fsdata1c.corp.arc.com,60020,1452067957740_752162766_33] for 1019 seconds. Will retry shortly ...
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot renew lease for DFSClient_hb_rs_fsdata1c.corp.arc.com,60020,1452067957740_752162766_33. Name node is in safe mode.
Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1201)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4132)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:767)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:588)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
at org.apache.hadoop.ipc.Client.call(Client.java:1410)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy18.renewLease(Unknown Source)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy18.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:532)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:272)
at com.sun.proxy.$Proxy19.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:791)
at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
at java.lang.Thread.run(Thread.java:744)
2016-03-23 23:14:34,055 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer$PeriodicMemstoreFlusher: regionserver60020.periodicFlusher exiting
2016-03-23 23:14:34,055 INFO [regionserver60020] regionserver.CompactSplitThread: Waiting for Split Thread to finish...
2016-03-23 23:14:34,055 INFO [regionserver60020] regionserver.CompactSplitThread: Waiting for Merge Thread to finish...
2016-03-23 23:14:34,055 INFO [regionserver60020] regionserver.CompactSplitThread: Waiting for Large Compaction Thread to finish...
2016-03-23 23:14:34,055 INFO [regionserver60020] regionserver.CompactSplitThread: Waiting for Small Compaction Thread to finish...
2016-03-23 23:14:34,060 INFO [regionserver60020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x151389443dd01d0
2016-03-23 23:14:34,063 INFO [regionserver60020] zookeeper.ZooKeeper: Session: 0x151389443dd01d0 closed
2016-03-23 23:14:34,063 INFO [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-03-23 23:14:34,067 INFO [regionserver60020] zookeeper.ZooKeeper: Session: 0x251389443e8021c closed
2016-03-23 23:14:34,067 INFO [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-03-23 23:14:34,068 INFO [regionserver60020] regionserver.HRegionServer: stopping server fsdata1c.corp.arc.com,60020,1452067957740; zookeeper connection closed.
2016-03-23 23:14:34,068 INFO [regionserver60020] regionserver.HRegionServer: regionserver60020 exiting
2016-03-23 23:14:34,068 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2403)
2016-03-23 23:14:34,071 INFO [Thread-11] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@64b9f908
2016-03-23 23:14:34,071 INFO [Thread-11] regionserver.HRegionServer: STOPPED: Shutdown hook
2016-03-23 23:14:34,071 INFO [Thread-11] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2016-03-23 23:14:34,071 INFO [Thread-11] regionserver.ShutdownHook: Shutdown hook finished.
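The root cause in the log above is the NameNode rejecting writes because it is in resource-constrained safe mode. A minimal sketch of how to confirm that before touching HBase (the NameNode directory path is an assumption; check dfs.namenode.name.dir for the real one):
# Check free space where the NameNode stores its metadata (directory is an assumption)
df -h /hadoop/hdfs/namenode
# Confirm safe mode status and overall DFS capacity
hdfs dfsadmin -safemode get
hdfs dfsadmin -report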
Labels: Apache Hadoop, Apache HBase