2017-05-24 00:30:32,776 WARN [B.defaultRpcServer.handler=1,queue=1,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495585817030,"responsesize":416,"method":"Scan","processingtimems":15746,"client":"hadoop6:43280","queuetimems":0,"class":"HRegionServer"} 2017-05-24 00:30:47,701 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.84 MB, and 1/1 column families' memstores are being flushed. 2017-05-24 00:30:49,359 INFO [sync.3] wal.FSHLog: Slow sync cost: 213 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]] 2017-05-24 00:30:49,963 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55137686, memsize=128.8 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/7230060d0fcc4eee91b01ecea97bd00c 2017-05-24 00:30:49,985 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/7230060d0fcc4eee91b01ecea97bd00c, entries=124615, sequenceid=55137686, filesize=19.7 M 2017-05-24 00:30:49,988 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.84 MB/135100360, currentsize=14.37 MB/15066432 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. 
in 2287ms, sequenceid=55137686, compaction requested=true 2017-05-24 00:30:50,013 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. 2017-05-24 00:30:50,013 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M 2017-05-24 00:30:50,024 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=61587, currentSize=4065225336, freeSize=208272520, maxSize=4273497856, heapSize=4065225336, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2017-05-24 00:30:50,919 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495585631672 with entries=188, filesize=122.97 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495585850875 2017-05-24 00:30:51,873 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of 
ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into b4a00903629245939165380e1ad869cb(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 1sec to execute. 2017-05-24 00:30:51,874 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18805795726308993; duration=1sec 2017-05-24 00:31:00,229 INFO [sync.4] wal.FSHLog: Slow sync cost: 561 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-485b95df-5888-4104-8455-5448e0f7846b,DISK]] 2017-05-24 00:31:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.84 GB, freeSize=141.76 MB, max=3.98 GB, blockCount=62479, accesses=121786268, hits=61459718, hitRatio=50.47%, , cachingAccesses=117843981, cachingHits=61402133, cachingHitsRatio=52.10%, evictions=24552, evicted=56146355, evictedPerRun=2286.834228515625 2017-05-24 00:31:49,399 INFO [RS_OPEN_META-aps-hadoop6:16020-0-MetaLogRoller] wal.FSHLog: Slow sync cost: 1763 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]] 2017-05-24 00:31:49,590 INFO [RS_OPEN_META-aps-hadoop6:16020-0-MetaLogRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899..meta.1495582303574.meta 
with entries=0, filesize=91 B; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899..meta.1495585907617.meta 2017-05-24 00:31:49,592 INFO [RS_OPEN_META-aps-hadoop6:16020-0-MetaLogRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899..meta.1495582303574.meta to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899..meta.1495582303574.meta 2017-05-24 00:33:17,037 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.74 MB, and 1/1 column families' memstores are being flushed. 2017-05-24 00:33:17,799 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55138006, memsize=128.7 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/aa6fb09daf3c47f89511be554ff15b8b 2017-05-24 00:33:17,817 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/aa6fb09daf3c47f89511be554ff15b8b, entries=124615, sequenceid=55138006, filesize=19.7 M 2017-05-24 00:33:17,818 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.74 MB/134997728, currentsize=5.55 MB/5823464 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. 
in 781ms, sequenceid=55138006, compaction requested=true 2017-05-24 00:34:12,801 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Slow sync cost: 145 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-485b95df-5888-4104-8455-5448e0f7846b,DISK]] 2017-05-24 00:34:12,834 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495585850875 with entries=220, filesize=122.93 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586052493 2017-05-24 00:35:13,214 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 129.03 MB, and 1/1 column families' memstores are being flushed. 2017-05-24 00:35:14,446 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55138325, memsize=129.0 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/96f566554add4906855cf05d3a06697f 2017-05-24 00:35:14,477 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/96f566554add4906855cf05d3a06697f, entries=124615, sequenceid=55138325, filesize=19.7 M 2017-05-24 00:35:14,481 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~129.03 MB/135302120, currentsize=5.38 MB/5643112 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. 
in 1267ms, sequenceid=55138325, compaction requested=true 2017-05-24 00:35:14,486 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. 2017-05-24 00:35:14,486 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M 2017-05-24 00:35:14,492 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62061, currentSize=4108749752, freeSize=164748104, maxSize=4273497856, heapSize=4108749752, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2017-05-24 00:35:16,352 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into 5093895952d148ebb24ed35c4753fa7f(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 1sec to execute. 
2017-05-24 00:35:16,353 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18806060199699729; duration=1sec 2017-05-24 00:35:16,810 INFO [sync.3] wal.FSHLog: Slow sync cost: 228 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]] 2017-05-24 00:36:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.92 GB, freeSize=65.31 MB, max=3.98 GB, blockCount=63481, accesses=122978440, hits=62078539, hitRatio=50.48%, , cachingAccesses=119034383, cachingHits=62020954, cachingHitsRatio=52.10%, evictions=24773, evicted=56716937, evictedPerRun=2289.4658203125 2017-05-24 00:36:12,251 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586052493 with entries=216, filesize=122.20 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586172137 2017-05-24 00:36:12,252 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495585850875 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495585850875 2017-05-24 00:36:41,826 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region 
CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115. after a delay of 16739 2017-05-24 00:36:51,826 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115. after a delay of 7022 2017-05-24 00:36:58,566 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115., current region memstore size 61.91 KB, and 2/9 column families' memstores are being flushed. 2017-05-24 00:36:58,566 INFO [MemStoreFlusher.0] regionserver.HRegion: Flushing Column Family: Ra which was occupying 52.52 KB of memstore. 2017-05-24 00:36:58,566 INFO [MemStoreFlusher.0] regionserver.HRegion: Flushing Column Family: info which was occupying 10.19 KB of memstore. 2017-05-24 00:36:58,606 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=67524, memsize=52.1 K, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/CampaignGoal/3097db2b5f88e47afafb31b638d68115/.tmp/0b873e8767114386a8825ba523c269c2 2017-05-24 00:36:58,636 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=67524, memsize=9.8 K, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/CampaignGoal/3097db2b5f88e47afafb31b638d68115/.tmp/5b76f5e10c6645328aaa015220efe419 2017-05-24 00:36:58,656 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/CampaignGoal/3097db2b5f88e47afafb31b638d68115/Ra/0b873e8767114386a8825ba523c269c2, entries=248, sequenceid=67524, filesize=7.7 K 2017-05-24 00:36:58,669 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/CampaignGoal/3097db2b5f88e47afafb31b638d68115/info/5b76f5e10c6645328aaa015220efe419, entries=56, sequenceid=67524, filesize=5.3 K 
2017-05-24 00:36:58,670 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~61.91 KB/63400, currentsize=0 B/0 for region CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115. in 104ms, sequenceid=67524, compaction requested=false 2017-05-24 00:37:12,214 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.86 MB, and 1/1 column families' memstores are being flushed. 2017-05-24 00:37:15,886 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55138645, memsize=128.9 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/be5e4bedf16e492287a7833dc4dc6e81 2017-05-24 00:37:15,905 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/be5e4bedf16e492287a7833dc4dc6e81, entries=124615, sequenceid=55138645, filesize=19.7 M 2017-05-24 00:37:15,907 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.86 MB/135121768, currentsize=24.06 MB/25231648 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. 
in 3692ms, sequenceid=55138645, compaction requested=true 2017-05-24 00:38:05,748 INFO [sync.0] wal.FSHLog: Slow sync cost: 353 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]] 2017-05-24 00:38:15,801 INFO [sync.2] wal.FSHLog: Slow sync cost: 277 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]] 2017-05-24 00:39:03,528 INFO [sync.2] wal.FSHLog: Slow sync cost: 1846 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]] 2017-05-24 00:39:08,927 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586172137 with entries=900, filesize=122.46 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586348846 2017-05-24 00:39:08,928 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495582350623 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495582350623 2017-05-24 00:39:08,952 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving 
hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495582805159 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495582805159 2017-05-24 00:39:10,022 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.84 MB, and 1/1 column families' memstores are being flushed. 2017-05-24 00:39:10,821 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55138964, memsize=128.8 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/899bf5bcd4584d06905d543b8f8a3aac 2017-05-24 00:39:10,851 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/899bf5bcd4584d06905d543b8f8a3aac, entries=124615, sequenceid=55138964, filesize=19.7 M 2017-05-24 00:39:10,857 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.84 MB/135101448, currentsize=4.45 MB/4666680 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 835ms, sequenceid=55138964, compaction requested=true 2017-05-24 00:39:10,861 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. 
2017-05-24 00:39:10,861 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M 2017-05-24 00:39:10,864 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=61810, currentSize=4101531816, freeSize=171966040, maxSize=4273497856, heapSize=4101531816, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2017-05-24 00:39:12,632 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into 0038d9531d774efd975a0d7f59784618(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 1sec to execute. 
2017-05-24 00:39:12,632 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18806296574325127; duration=1sec 2017-05-24 00:39:39,389 INFO [sync.2] wal.FSHLog: Slow sync cost: 325 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]] 2017-05-24 00:39:47,992 INFO [sync.3] wal.FSHLog: Slow sync cost: 3905 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]] 2017-05-24 00:40:04,496 INFO [sync.0] wal.FSHLog: Slow sync cost: 249 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]] 2017-05-24 00:40:14,492 INFO [sync.2] wal.FSHLog: Slow sync cost: 188 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]] 2017-05-24 00:40:23,591 INFO [sync.0] wal.FSHLog: Slow sync cost: 552 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], 
DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]] 2017-05-24 00:40:23,591 INFO [sync.1] wal.FSHLog: Slow sync cost: 524 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]] 2017-05-24 00:40:30,756 INFO [sync.2] wal.FSHLog: Slow sync cost: 1442 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]] 2017-05-24 00:41:01,431 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.83 GB, freeSize=157.20 MB, max=3.98 GB, blockCount=61876, accesses=124520992, hits=62848955, hitRatio=50.47%, , cachingAccesses=120575165, cachingHits=62791370, cachingHitsRatio=52.08%, evictions=25072, evicted=57488903, evictedPerRun=2292.952392578125 2017-05-24 00:41:14,040 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.39 MB, and 1/1 column families' memstores are being flushed. 
2017-05-24 00:41:14,962 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55139286, memsize=128.4 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/cd75df2a57c64beda60c3fce56d588cd 2017-05-24 00:41:14,979 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/cd75df2a57c64beda60c3fce56d588cd, entries=124615, sequenceid=55139286, filesize=19.7 M 2017-05-24 00:41:14,981 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.39 MB/134624944, currentsize=9.15 MB/9592192 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 941ms, sequenceid=55139286, compaction requested=true 2017-05-24 00:41:34,424 INFO [sync.0] wal.FSHLog: Slow sync cost: 220 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]] 2017-05-24 00:41:35,154 INFO [sync.2] wal.FSHLog: Slow sync cost: 457 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]] 2017-05-24 00:41:59,607 INFO [sync.3] wal.FSHLog: Slow sync cost: 256 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]] 2017-05-24 00:42:04,190 INFO 
[regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586348846 with entries=3190, filesize=121.75 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586524149 2017-05-24 00:43:06,812 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.08 MB, and 1/1 column families' memstores are being flushed. 2017-05-24 00:43:07,690 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55139617, memsize=128.1 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/1c0b5d728d1c4abdb232f55d0a04de53 2017-05-24 00:43:07,711 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/1c0b5d728d1c4abdb232f55d0a04de53, entries=124615, sequenceid=55139617, filesize=19.7 M 2017-05-24 00:43:07,712 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.08 MB/134297560, currentsize=11.09 MB/11631320 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 900ms, sequenceid=55139617, compaction requested=true 2017-05-24 00:43:07,715 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. 
2017-05-24 00:43:07,715 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M 2017-05-24 00:43:07,720 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62902, currentSize=4180658168, freeSize=92839688, maxSize=4273497856, heapSize=4180658168, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false 2017-05-24 00:43:09,491 INFO [sync.2] wal.FSHLog: Slow sync cost: 847 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-485b95df-5888-4104-8455-5448e0f7846b,DISK]] 2017-05-24 00:43:09,542 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into 034d40f143eb4348819a706c676ed983(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 1sec to execute. 
2017-05-24 00:43:09,544 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18806533429038750; duration=1sec
2017-05-24 00:43:14,016 INFO [sync.0] wal.FSHLog: Slow sync cost: 323 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-485b95df-5888-4104-8455-5448e0f7846b,DISK]]
2017-05-24 00:43:28,879 INFO [sync.3] wal.FSHLog: Slow sync cost: 187 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-485b95df-5888-4104-8455-5448e0f7846b,DISK]]
2017-05-24 00:43:39,116 INFO [sync.1] wal.FSHLog: Slow sync cost: 297 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-485b95df-5888-4104-8455-5448e0f7846b,DISK]]
2017-05-24 00:43:49,188 INFO [sync.2] wal.FSHLog: Slow sync cost: 319 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-485b95df-5888-4104-8455-5448e0f7846b,DISK]]
2017-05-24 00:43:59,272 INFO [sync.3] wal.FSHLog: Slow sync cost: 319 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-485b95df-5888-4104-8455-5448e0f7846b,DISK]]
2017-05-24 00:44:02,347 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586524149 with entries=2010, filesize=121.69 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586642308
2017-05-24 00:44:04,088 INFO [sync.3] wal.FSHLog: Slow sync cost: 1197 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:45:03,100 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.03 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:45:03,935 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55139949, memsize=128.0 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/1416d283efed4339b389a6eed3465747
2017-05-24 00:45:03,962 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/1416d283efed4339b389a6eed3465747, entries=124615, sequenceid=55139949, filesize=19.7 M
2017-05-24 00:45:03,990 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.03 MB/134251944, currentsize=7.96 MB/8347504 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 890ms, sequenceid=55139949, compaction requested=true
2017-05-24 00:45:05,159 INFO [sync.1] wal.FSHLog: Slow sync cost: 388 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:45:05,952 INFO [sync.2] wal.FSHLog: Slow sync cost: 787 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:45:08,114 INFO [sync.4] wal.FSHLog: Slow sync cost: 2106 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:45:11,508 INFO [sync.2] wal.FSHLog: Slow sync cost: 159 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:45:45,633 INFO [sync.0] wal.FSHLog: Slow sync cost: 330 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:45:50,623 INFO [sync.2] wal.FSHLog: Slow sync cost: 312 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:45:50,628 INFO [sync.3] wal.FSHLog: Slow sync cost: 310 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:46:00,734 INFO [sync.0] wal.FSHLog: Slow sync cost: 350 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:46:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.83 GB, freeSize=152.35 MB, max=3.98 GB, blockCount=61878, accesses=126588907, hits=63915896, hitRatio=50.49%, , cachingAccesses=122641310, cachingHits=63858311, cachingHitsRatio=52.07%, evictions=25459, evicted=58488106, evictedPerRun=2297.344970703125
2017-05-24 00:46:01,475 INFO [sync.0] wal.FSHLog: Slow sync cost: 311 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:46:44,371 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586642308 with entries=3890, filesize=121.70 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586804332
2017-05-24 00:46:50,217 INFO [sync.4] wal.FSHLog: Slow sync cost: 772 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 00:46:50,217 INFO [sync.0] wal.FSHLog: Slow sync cost: 732 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 00:46:51,624 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.13 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:46:52,556 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55140280, memsize=128.1 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/dbbc625e1c87489a920effcf175c44e4
2017-05-24 00:46:52,607 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/dbbc625e1c87489a920effcf175c44e4, entries=124615, sequenceid=55140280, filesize=19.7 M
2017-05-24 00:46:52,609 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.13 MB/134349032, currentsize=4.33 MB/4537792 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 985ms, sequenceid=55140280, compaction requested=true
2017-05-24 00:46:52,613 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 00:46:52,613 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 00:46:52,618 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62225, currentSize=4136701440, freeSize=136796416, maxSize=4273497856, heapSize=4136701440, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 00:46:54,399 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into 9794b6d6d85847a996a3fdf84bb05cef(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 1sec to execute.
2017-05-24 00:46:54,400 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18806758326762849; duration=1sec
2017-05-24 00:48:34,422 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586804332 with entries=2439, filesize=121.89 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586914366
2017-05-24 00:48:41,621 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.59 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:48:42,634 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55140612, memsize=128.6 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/626640a9a0d34b609755dd0072564a03
2017-05-24 00:48:42,652 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/626640a9a0d34b609755dd0072564a03, entries=124615, sequenceid=55140612, filesize=19.7 M
2017-05-24 00:48:42,653 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.59 MB/134838912, currentsize=1.60 MB/1680808 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 1032ms, sequenceid=55140612, compaction requested=true
2017-05-24 00:49:31,112 INFO [sync.3] wal.FSHLog: Slow sync cost: 167 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 00:49:31,112 INFO [sync.2] wal.FSHLog: Slow sync cost: 167 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 00:49:39,084 INFO [sync.1] wal.FSHLog: Slow sync cost: 562 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 00:49:39,086 INFO [sync.2] wal.FSHLog: Slow sync cost: 563 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 00:49:41,283 INFO [sync.1] wal.FSHLog: Slow sync cost: 313 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 00:49:41,283 INFO [sync.2] wal.FSHLog: Slow sync cost: 313 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 00:49:49,637 INFO [sync.3] wal.FSHLog: Slow sync cost: 1128 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 00:49:49,637 INFO [sync.4] wal.FSHLog: Slow sync cost: 1083 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 00:50:35,119 INFO [sync.2] wal.FSHLog: Slow sync cost: 319 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 00:51:01,433 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.88 GB, freeSize=97.93 MB, max=3.98 GB, blockCount=62741, accesses=129188021, hits=65213604, hitRatio=50.48%, , cachingAccesses=125238654, cachingHits=65156019, cachingHitsRatio=52.03%, evictions=25962, evicted=59786880, evictedPerRun=2302.861083984375
2017-05-24 00:51:05,286 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.76 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:51:06,635 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55140935, memsize=128.8 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/3568e9b95c3c4c8f962859b72d1d3689
2017-05-24 00:51:06,651 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/3568e9b95c3c4c8f962859b72d1d3689, entries=124615, sequenceid=55140935, filesize=19.7 M
2017-05-24 00:51:06,664 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.76 MB/135019728, currentsize=9.81 MB/10288680 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 1378ms, sequenceid=55140935, compaction requested=true
2017-05-24 00:51:06,667 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 00:51:06,667 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 00:51:06,672 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62652, currentSize=4165004888, freeSize=108492968, maxSize=4273497856, heapSize=4165004888, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 00:51:07,821 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586914366 with entries=5441, filesize=121.65 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587067784
2017-05-24 00:51:08,590 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into 690cd819d36a45c48ab46fa12eae3c9e(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 1sec to execute.
2017-05-24 00:51:08,592 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18807012381057941; duration=1sec
2017-05-24 00:51:14,979 INFO [sync.4] wal.FSHLog: Slow sync cost: 292 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK]]
2017-05-24 00:51:19,978 INFO [sync.2] wal.FSHLog: Slow sync cost: 286 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK]]
2017-05-24 00:51:40,006 INFO [sync.2] wal.FSHLog: Slow sync cost: 226 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK]]
2017-05-24 00:51:41,826 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region SocialMediaAnalyticsRecipients,99999999,1478427863544.8111e9fc6c837bbac6a95ddea20101ad. after a delay of 10995
2017-05-24 00:51:51,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region SocialMediaAnalyticsRecipients,99999999,1478427863544.8111e9fc6c837bbac6a95ddea20101ad. after a delay of 17066
2017-05-24 00:51:52,823 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for SocialMediaAnalyticsRecipients,99999999,1478427863544.8111e9fc6c837bbac6a95ddea20101ad., current region memstore size 435.70 KB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:51:52,864 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=2201, memsize=435.7 K, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/SocialMediaAnalyticsRecipients/8111e9fc6c837bbac6a95ddea20101ad/.tmp/1880a093ee1f4d1bbd9b47fd75c1ecc1
2017-05-24 00:51:52,913 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/SocialMediaAnalyticsRecipients/8111e9fc6c837bbac6a95ddea20101ad/info/1880a093ee1f4d1bbd9b47fd75c1ecc1, entries=2056, sequenceid=2201, filesize=25.3 K
2017-05-24 00:51:52,915 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~435.70 KB/446152, currentsize=0 B/0 for region SocialMediaAnalyticsRecipients,99999999,1478427863544.8111e9fc6c837bbac6a95ddea20101ad. in 92ms, sequenceid=2201, compaction requested=true
2017-05-24 00:51:52,917 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region SocialMediaAnalyticsRecipients,99999999,1478427863544.8111e9fc6c837bbac6a95ddea20101ad.
2017-05-24 00:51:52,917 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of SocialMediaAnalyticsRecipients,99999999,1478427863544.8111e9fc6c837bbac6a95ddea20101ad. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/SocialMediaAnalyticsRecipients/8111e9fc6c837bbac6a95ddea20101ad/.tmp, totalSize=1.3 M
2017-05-24 00:51:52,991 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=63533, currentSize=4223234872, freeSize=50262984, maxSize=4273497856, heapSize=4223301112, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 00:51:53,301 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 (all) file(s) in info of SocialMediaAnalyticsRecipients,99999999,1478427863544.8111e9fc6c837bbac6a95ddea20101ad. into c7648c2bbbae40c49d0e5f528a1e9467(size=1.2 M), total size for store is 1.2 M. This selection was in queue for 0sec, and took 0sec to execute.
2017-05-24 00:51:53,302 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=SocialMediaAnalyticsRecipients,99999999,1478427863544.8111e9fc6c837bbac6a95ddea20101ad., storeName=info, fileCount=3, fileSize=1.3 M, priority=7, time=18807058630871532; duration=0sec
2017-05-24 00:52:08,969 INFO [sync.3] wal.FSHLog: Slow sync cost: 211 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK]]
2017-05-24 00:52:08,969 INFO [sync.4] wal.FSHLog: Slow sync cost: 204 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK]]
2017-05-24 00:52:09,784 INFO [sync.0] wal.FSHLog: Slow sync cost: 206 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK]]
2017-05-24 00:52:09,784 INFO [sync.1] wal.FSHLog: Slow sync cost: 206 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK]]
2017-05-24 00:52:57,026 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 129.20 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:52:58,024 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55141259, memsize=129.2 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/65eee2d7076b46918a6b83e31faef9e4
2017-05-24 00:52:58,041 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/65eee2d7076b46918a6b83e31faef9e4, entries=124615, sequenceid=55141259, filesize=19.7 M
2017-05-24 00:52:58,044 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~129.20 MB/135477656, currentsize=6.51 MB/6829016 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 1017ms, sequenceid=55141259, compaction requested=true
2017-05-24 00:53:10,347 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587067784 with entries=1522, filesize=121.85 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587190237
2017-05-24 00:53:10,348 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495583468342 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495583468342
2017-05-24 00:53:10,361 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495584047017 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495584047017
2017-05-24 00:53:10,378 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495584090536 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495584090536
2017-05-24 00:54:03,025 INFO [sync.0] wal.FSHLog: Slow sync cost: 161 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]]
2017-05-24 00:54:03,025 INFO [sync.4] wal.FSHLog: Slow sync cost: 191 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]]
2017-05-24 00:54:50,144 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.84 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:54:51,307 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55141578, memsize=128.8 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/478f5ef55bb64a6a95e33c91eb95d827
2017-05-24 00:54:51,326 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/478f5ef55bb64a6a95e33c91eb95d827, entries=124615, sequenceid=55141578, filesize=19.7 M
2017-05-24 00:54:51,327 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.84 MB/135100360, currentsize=7.78 MB/8156960 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 1183ms, sequenceid=55141578, compaction requested=true
2017-05-24 00:54:51,330 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 00:54:51,330 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 00:54:51,334 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=63461, currentSize=4218407184, freeSize=55090672, maxSize=4273497856, heapSize=4218407184, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 00:54:53,213 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into 99eba003a335493ebe82c8d012e7e5f7(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 1sec to execute.
2017-05-24 00:54:53,213 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18807237043794279; duration=1sec
2017-05-24 00:54:58,358 INFO [sync.3] wal.FSHLog: Slow sync cost: 122 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]]
2017-05-24 00:55:08,516 INFO [sync.0] wal.FSHLog: Slow sync cost: 218 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]]
2017-05-24 00:55:13,511 INFO [sync.3] wal.FSHLog: Slow sync cost: 290 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]]
2017-05-24 00:55:15,217 INFO [sync.1] wal.FSHLog: Slow sync cost: 1585 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK]]
2017-05-24 00:55:43,973 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587190237 with entries=2543, filesize=122.75 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587343879
2017-05-24 00:55:52,630 INFO [sync.3] wal.FSHLog: Slow sync cost: 1186 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK]]
2017-05-24 00:56:01,428 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.82 GB, freeSize=159.83 MB, max=3.98 GB, blockCount=61763, accesses=131648915, hits=66442791, hitRatio=50.47%, , cachingAccesses=127695866, cachingHits=66385206, cachingHitsRatio=51.99%, evictions=26438, evicted=61015879, evictedPerRun=2307.885498046875
2017-05-24 00:56:32,351 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.74 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:56:33,193 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55141898, memsize=128.7 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/64e664165177442fafd463c3851df5cf
2017-05-24 00:56:33,213 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/64e664165177442fafd463c3851df5cf, entries=124615, sequenceid=55141898, filesize=19.7 M
2017-05-24 00:56:33,215 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.74 MB/134997728, currentsize=5.55 MB/5823464 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 864ms, sequenceid=55141898, compaction requested=true
2017-05-24 00:57:32,358 INFO [sync.4] wal.FSHLog: Slow sync cost: 188 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK]]
2017-05-24 00:57:48,508 INFO [sync.4] wal.FSHLog: Slow sync cost: 285 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK]]
2017-05-24 00:58:16,243 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Slow sync cost: 106 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK]]
2017-05-24 00:58:16,304 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587343879 with entries=3256, filesize=123.86 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587496042
2017-05-24 00:58:18,133 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 129.03 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:58:18,974 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55142217, memsize=129.0 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/4face4ce2e8a4729811838116e249c5a
2017-05-24 00:58:18,999 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/4face4ce2e8a4729811838116e249c5a, entries=124615, sequenceid=55142217, filesize=19.7 M
2017-05-24 00:58:19,002 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~129.03 MB/135302120, currentsize=5.38 MB/5643112 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 869ms, sequenceid=55142217, compaction requested=true
2017-05-24 00:58:19,006 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 00:58:19,006 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 00:58:19,011 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62253, currentSize=4138473672, freeSize=135024184, maxSize=4273497856, heapSize=4138473672, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 00:58:21,403 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into e37d9b2ae92f41b2bbf67564f42aa2d6(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 2sec to execute.
2017-05-24 00:58:21,403 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18807444719489859; duration=2sec
2017-05-24 00:59:17,607 INFO [sync.3] wal.FSHLog: Slow sync cost: 332 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:59:17,608 INFO [sync.4] wal.FSHLog: Slow sync cost: 287 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK]]
2017-05-24 00:59:26,174 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., current region memstore size 128.61 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:59:27,365 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=76763, memsize=128.6 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp/fb6241f3a0c34dd78c2d4ed5fa8d06bf
2017-05-24 00:59:27,397 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/info/fb6241f3a0c34dd78c2d4ed5fa8d06bf, entries=660982, sequenceid=76763, filesize=6.5 M
2017-05-24 00:59:27,399 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.61 MB/134856808, currentsize=26.41 MB/27691152 for region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. in 1225ms, sequenceid=76763, compaction requested=true
2017-05-24 00:59:27,409 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3.
2017-05-24 00:59:27,409 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp, totalSize=89.5 M
2017-05-24 00:59:27,500 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=61844, currentSize=4111332856, freeSize=162165000, maxSize=4273497856, heapSize=4111332856, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 00:59:29,326 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587496042 with entries=3065, filesize=121.75 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587569272
2017-05-24 00:59:32,520 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., current region memstore size 128.55 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 00:59:33,657 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=76884, memsize=128.5 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp/141440f93a6d40f787ac509e1d33decb
2017-05-24 00:59:33,691 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/info/141440f93a6d40f787ac509e1d33decb, entries=656040, sequenceid=76884, filesize=6.7 M
2017-05-24 00:59:33,693 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.55 MB/134789328, currentsize=24.23 MB/25402064 for region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. in 1173ms, sequenceid=76884, compaction requested=true
2017-05-24 00:59:41,787 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. into 680fe41a3d1d422899335c487f5fea61(size=89.6 M), total size for store is 692.5 M. This selection was in queue for 0sec, and took 14sec to execute.
2017-05-24 00:59:41,787 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., storeName=info, fileCount=3, fileSize=89.5 M, priority=5, time=18807513123017165; duration=14sec
2017-05-24 00:59:59,738 INFO [sync.3] wal.FSHLog: Slow sync cost: 184 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK]]
2017-05-24 00:59:59,739 INFO [sync.4] wal.FSHLog: Slow sync cost: 180 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK]]
2017-05-24 01:00:07,624 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.86 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:00:08,447 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55142537, memsize=128.9 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/d9309e3493ba4de6b5c43cc0f650da27
2017-05-24 01:00:08,477 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/d9309e3493ba4de6b5c43cc0f650da27, entries=124615, sequenceid=55142537, filesize=19.7 M
2017-05-24 01:00:08,484 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.86 MB/135121768, currentsize=4.29 MB/4502456 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 860ms, sequenceid=55142537, compaction requested=true
2017-05-24 01:00:33,573 INFO [sync.2] wal.FSHLog: Slow sync cost: 161 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK]]
2017-05-24 01:00:33,573 INFO [sync.1] wal.FSHLog: Slow sync cost: 165 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK]]
2017-05-24 01:00:59,780 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587569272 with entries=3822, filesize=121.88 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587659710
2017-05-24 01:01:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.85 GB, freeSize=128.52 MB, max=3.98 GB, blockCount=62258, accesses=134351356, hits=67787515, hitRatio=50.46%, , cachingAccesses=130386236, cachingHits=67729929, cachingHitsRatio=51.95%, evictions=26959, evicted=62361037, evictedPerRun=2313.1806640625
2017-05-24 01:01:03,095 INFO [sync.2] wal.FSHLog: Slow sync cost: 105 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:01:03,095 INFO [sync.1] wal.FSHLog: Slow sync cost: 105 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:01:03,095 INFO [sync.0] wal.FSHLog: Slow sync cost: 128 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:01:33,550 INFO [sync.3] wal.FSHLog: Slow sync cost: 217 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:01:51,846 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.84 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:01:52,748 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55142856, memsize=128.8 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/9d1a2faa50ec4d3586658a0a3438eb6b
2017-05-24 01:01:52,773 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/9d1a2faa50ec4d3586658a0a3438eb6b, entries=124615, sequenceid=55142856, filesize=19.7 M
2017-05-24 01:01:52,775 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.84 MB/135101448, currentsize=4.45 MB/4666680 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 929ms, sequenceid=55142856, compaction requested=true
2017-05-24 01:01:52,782 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 01:01:52,782 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 01:01:52,788 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62266, currentSize=4139329288, freeSize=134168568, maxSize=4273497856, heapSize=4139329288, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:01:54,848 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into f6bdd543c61d4027aed3584ade9a34ef(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 2sec to execute.
2017-05-24 01:01:54,848 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18807658495196027; duration=2sec
2017-05-24 01:03:36,206 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587659710 with entries=3611, filesize=122.97 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587816152
2017-05-24 01:03:41,314 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.39 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:03:42,314 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55143178, memsize=128.4 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/eaf36e6775bb471e86e866486678e197
2017-05-24 01:03:42,389 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/eaf36e6775bb471e86e866486678e197, entries=124615, sequenceid=55143178, filesize=19.7 M
2017-05-24 01:03:42,392 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.39 MB/134624944, currentsize=9.15 MB/9592192 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 1078ms, sequenceid=55143178, compaction requested=true
2017-05-24 01:05:24,986 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.08 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:05:25,044 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587816152 with entries=2108, filesize=122.00 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587924986
2017-05-24 01:05:25,980 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55143509, memsize=128.1 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/6f6c76cc83824c0ebffcfbc0ff738091
2017-05-24 01:05:26,016 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/6f6c76cc83824c0ebffcfbc0ff738091, entries=124615, sequenceid=55143509, filesize=19.7 M
2017-05-24 01:05:26,019 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.08 MB/134297560, currentsize=8.72 MB/9144656 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 1033ms, sequenceid=55143509, compaction requested=true
2017-05-24 01:05:26,026 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 01:05:26,026 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 01:05:26,030 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=63090, currentSize=4186742112, freeSize=86755744, maxSize=4273497856, heapSize=4186742112, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:05:28,035 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into ada2fbed1fd04ad6a6207bb5e39b7235(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 2sec to execute.
2017-05-24 01:05:28,035 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18807871739338496; duration=2sec
2017-05-24 01:06:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.92 GB, freeSize=58.55 MB, max=3.98 GB, blockCount=63470, accesses=137460003, hits=69365563, hitRatio=50.46%, , cachingAccesses=133491343, cachingHits=69307977, cachingHitsRatio=51.92%, evictions=27550, evicted=63886877, evictedPerRun=2318.94287109375
2017-05-24 01:07:09,593 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.03 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:07:10,453 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55143841, memsize=128.0 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/e5d29367b97d4959bf6bf3d8bcb7ecf1
2017-05-24 01:07:10,485 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/e5d29367b97d4959bf6bf3d8bcb7ecf1, entries=124615, sequenceid=55143841, filesize=19.7 M
2017-05-24 01:07:10,487 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.03 MB/134251944, currentsize=8.43 MB/8838976 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 894ms, sequenceid=55143841, compaction requested=true
2017-05-24 01:07:41,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region WebAnalyticsUserFlow,cccccccc,1494598376307.ded232c57e472e24c9d63b87016dc7e8. after a delay of 15078
2017-05-24 01:07:51,836 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region WebAnalyticsUserFlow,cccccccc,1494598376307.ded232c57e472e24c9d63b87016dc7e8. after a delay of 19619
2017-05-24 01:07:54,878 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587924986 with entries=2900, filesize=122.20 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588074802
2017-05-24 01:07:56,906 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for WebAnalyticsUserFlow,cccccccc,1494598376307.ded232c57e472e24c9d63b87016dc7e8., current region memstore size 994.83 KB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:07:56,965 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=154, memsize=994.8 K, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/ded232c57e472e24c9d63b87016dc7e8/.tmp/c36fa936506d4ad0ba3f4a6cfaecf0bd
2017-05-24 01:07:57,003 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/ded232c57e472e24c9d63b87016dc7e8/info/c36fa936506d4ad0ba3f4a6cfaecf0bd, entries=5040, sequenceid=154, filesize=58.4 K
2017-05-24 01:07:57,005 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~994.83 KB/1018704, currentsize=0 B/0 for region WebAnalyticsUserFlow,cccccccc,1494598376307.ded232c57e472e24c9d63b87016dc7e8. in 99ms, sequenceid=154, compaction requested=false
2017-05-24 01:08:51,542 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.13 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:08:52,626 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55144172, memsize=128.1 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/ad7ab4ebb1bf4d28a25f9a0776b1fffa
2017-05-24 01:08:52,657 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/ad7ab4ebb1bf4d28a25f9a0776b1fffa, entries=124615, sequenceid=55144172, filesize=19.7 M
2017-05-24 01:08:52,659 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.13 MB/134349032, currentsize=4.33 MB/4537792 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 1118ms, sequenceid=55144172, compaction requested=true
2017-05-24 01:08:52,665 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 01:08:52,665 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 01:08:52,669 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62150, currentSize=4124229760, freeSize=149268096, maxSize=4273497856, heapSize=4124229760, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:08:54,614 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into d3085321430742e1aca6e7467c5144c7(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 1sec to execute.
2017-05-24 01:08:54,614 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18808078378698425; duration=1sec
2017-05-24 01:08:58,877 INFO [sync.2] wal.FSHLog: Slow sync cost: 412 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK]]
2017-05-24 01:09:46,353 INFO [sync.1] wal.FSHLog: Slow sync cost: 168 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK]]
2017-05-24 01:09:46,353 INFO [sync.0] wal.FSHLog: Slow sync cost: 176 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK]]
2017-05-24 01:09:46,362 INFO [sync.2] wal.FSHLog: Slow sync cost: 152 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK]]
2017-05-24 01:09:46,588 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Slow sync cost: 120 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK]]
2017-05-24 01:09:46,619 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588074802 with entries=2425, filesize=123.62 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588186458
2017-05-24 01:09:46,623 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495584311808 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495584311808
2017-05-24 01:09:49,059 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., current region memstore size 128.69 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:09:50,246 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=77006, memsize=128.7 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp/45d3a8a04c364e99b9be067a2e855161
2017-05-24 01:09:50,329 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/info/45d3a8a04c364e99b9be067a2e855161, entries=662354, sequenceid=77006, filesize=6.5 M
2017-05-24 01:09:50,332 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.69 MB/134944288, currentsize=23.81 MB/24963472 for region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. in 1273ms, sequenceid=77006, compaction requested=true
2017-05-24 01:09:50,346 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3.
2017-05-24 01:09:50,346 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp, totalSize=102.7 M
2017-05-24 01:09:50,351 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=63049, currentSize=4177116312, freeSize=96381544, maxSize=4273497856, heapSize=4177116312, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:09:56,397 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., current region memstore size 128.71 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:09:57,624 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=77127, memsize=128.7 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp/d3c70319837245949f5b176005773f5d
2017-05-24 01:09:57,672 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/info/d3c70319837245949f5b176005773f5d, entries=658336, sequenceid=77127, filesize=6.6 M
2017-05-24 01:09:57,674 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.71 MB/134961960, currentsize=14.76 MB/15479440 for region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. in 1277ms, sequenceid=77127, compaction requested=true
2017-05-24 01:10:05,397 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. into 3c9005cdf26c44d797ce9b5a05a9ec38(size=102.7 M), total size for store is 705.6 M. This selection was in queue for 0sec, and took 15sec to execute.
2017-05-24 01:10:05,398 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., storeName=info, fileCount=3, fileSize=102.7 M, priority=5, time=18808136059943550; duration=15sec
2017-05-24 01:10:24,667 INFO [sync.3] wal.FSHLog: Slow sync cost: 115 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:10:37,985 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588186458 with entries=625, filesize=121.98 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588237843
2017-05-24 01:10:45,551 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.59 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:10:46,476 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55144504, memsize=128.6 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/3167841ee4824581a57d27cfea4dbf18
2017-05-24 01:10:46,494 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/3167841ee4824581a57d27cfea4dbf18, entries=124615, sequenceid=55144504, filesize=19.7 M
2017-05-24 01:10:46,496 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.59 MB/134838912, currentsize=1.60 MB/1680808 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 945ms, sequenceid=55144504, compaction requested=true
2017-05-24 01:11:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.92 GB, freeSize=63.01 MB, max=3.98 GB, blockCount=63471, accesses=140714373, hits=71010806, hitRatio=50.46%, cachingAccesses=136732141, cachingHits=70953219, cachingHitsRatio=51.89%, evictions=28168, evicted=65482430, evictedPerRun=2324.7099609375
2017-05-24 01:11:26,578 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., current region memstore size 128.45 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:11:27,658 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=77249, memsize=128.5 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp/bc452a3c70e949b7b6e1d6c6c073f5f4
2017-05-24 01:11:27,682 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/info/bc452a3c70e949b7b6e1d6c6c073f5f4, entries=659876, sequenceid=77249, filesize=6.5 M
2017-05-24 01:11:27,684 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.45 MB/134691000, currentsize=21.75 MB/22803952 for region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. in 1106ms, sequenceid=77249, compaction requested=true
2017-05-24 01:11:27,692 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3.
2017-05-24 01:11:27,692 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp, totalSize=115.9 M
2017-05-24 01:11:27,696 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=61916, currentSize=4104983168, freeSize=168713376, maxSize=4273497856, heapSize=4104519608, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:11:31,866 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588237843 with entries=1248, filesize=123.21 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588291783
2017-05-24 01:11:37,774 INFO [sync.1] wal.FSHLog: Slow sync cost: 833 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:11:46,079 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. into 57a8f0f5f51f487a80a68a61d0908522(size=115.9 M), total size for store is 712.2 M. This selection was in queue for 0sec, and took 18sec to execute.
2017-05-24 01:11:46,080 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., storeName=info, fileCount=3, fileSize=115.9 M, priority=5, time=18808233405784829; duration=18sec
2017-05-24 01:12:34,783 INFO [sync.3] wal.FSHLog: Slow sync cost: 312 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:13:10,173 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.76 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:13:11,073 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55144827, memsize=128.8 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/0f78763f76e94419ad0ab2fc2eee0812
2017-05-24 01:13:11,097 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/0f78763f76e94419ad0ab2fc2eee0812, entries=124615, sequenceid=55144827, filesize=19.7 M
2017-05-24 01:13:11,100 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.76 MB/135019728, currentsize=6.59 MB/6914576 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 927ms, sequenceid=55144827, compaction requested=true
2017-05-24 01:13:11,104 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 01:13:11,104 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 01:13:11,108 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62062, currentSize=4117034592, freeSize=156463264, maxSize=4273497856, heapSize=4117034592, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:13:12,606 INFO [sync.1] wal.FSHLog: Slow sync cost: 325 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:13:12,606 INFO [sync.2] wal.FSHLog: Slow sync cost: 320 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:13:12,618 INFO [sync.3] wal.FSHLog: Slow sync cost: 276 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:13:13,164 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into fbfa0bfd633f46f8b043621fafed2751(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 2sec to execute.
2017-05-24 01:13:13,164 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18808336817740778; duration=2sec
2017-05-24 01:13:32,908 INFO [sync.4] wal.FSHLog: Slow sync cost: 107 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:13:32,908 INFO [sync.0] wal.FSHLog: Slow sync cost: 107 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:13:37,339 INFO [sync.4] wal.FSHLog: Slow sync cost: 274 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:13:37,339 INFO [sync.0] wal.FSHLog: Slow sync cost: 229 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:13:41,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region AverageCampaignPerformance,33333333,1478428256763.573482bf2396e92f7ea555322e355f0c. after a delay of 4796
2017-05-24 01:13:41,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region AverageCampaignPerformance,cccccccc,1478428256763.0e45b3d62a15c443e71563ba6e8e5633. after a delay of 7500
2017-05-24 01:13:46,623 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for AverageCampaignPerformance,33333333,1478428256763.573482bf2396e92f7ea555322e355f0c., current region memstore size 5.89 KB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:13:46,659 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=4949, memsize=5.9 K, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/AverageCampaignPerformance/573482bf2396e92f7ea555322e355f0c/.tmp/678d795ade274fc0a8db8fce2fe5ed1c
2017-05-24 01:13:46,679 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/AverageCampaignPerformance/573482bf2396e92f7ea555322e355f0c/info/678d795ade274fc0a8db8fce2fe5ed1c, entries=36, sequenceid=4949, filesize=5.2 K
2017-05-24 01:13:46,680 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~5.89 KB/6032, currentsize=0 B/0 for region AverageCampaignPerformance,33333333,1478428256763.573482bf2396e92f7ea555322e355f0c. in 57ms, sequenceid=4949, compaction requested=false
2017-05-24 01:13:49,327 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for AverageCampaignPerformance,cccccccc,1478428256763.0e45b3d62a15c443e71563ba6e8e5633., current region memstore size 91.30 KB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:13:49,354 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=5142, memsize=91.3 K, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/AverageCampaignPerformance/0e45b3d62a15c443e71563ba6e8e5633/.tmp/2490fa0b6d9647bdb1cc580b93e67f51
2017-05-24 01:13:49,373 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/AverageCampaignPerformance/0e45b3d62a15c443e71563ba6e8e5633/info/2490fa0b6d9647bdb1cc580b93e67f51, entries=558, sequenceid=5142, filesize=8.6 K
2017-05-24 01:13:49,376 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~91.30 KB/93496, currentsize=0 B/0 for region AverageCampaignPerformance,cccccccc,1478428256763.0e45b3d62a15c443e71563ba6e8e5633. in 49ms, sequenceid=5142, compaction requested=false
2017-05-24 01:14:01,440 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588291783 with entries=4416, filesize=121.67 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588441384
2017-05-24 01:14:31,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504. after a delay of 13426
2017-05-24 01:14:41,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504. after a delay of 6233
2017-05-24 01:14:41,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region CampaignSummaryTrends_New,99999999,1478427970677.fea5fdce86225406c7dd0dd4c7e51a38. after a delay of 4029
2017-05-24 01:14:45,254 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504., current region memstore size 75.63 KB, and 2/10 column families' memstores are being flushed.
2017-05-24 01:14:45,254 INFO [MemStoreFlusher.0] regionserver.HRegion: Flushing Column Family: Em which was occupying 15.16 KB of memstore.
2017-05-24 01:14:45,254 INFO [MemStoreFlusher.0] regionserver.HRegion: Flushing Column Family: info which was occupying 16.96 KB of memstore.
2017-05-24 01:14:45,325 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=70336, memsize=14.8 K, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/CampaignSummaryFactsMDC/fcf0380fd986f7c3d3f3bf298e91e504/.tmp/659560d78db24e85afc3f2254815dfd4
2017-05-24 01:14:45,355 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=70336, memsize=16.6 K, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/CampaignSummaryFactsMDC/fcf0380fd986f7c3d3f3bf298e91e504/.tmp/616f15354c2848e197b1832be2f00362
2017-05-24 01:14:45,376 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/CampaignSummaryFactsMDC/fcf0380fd986f7c3d3f3bf298e91e504/Em/659560d78db24e85afc3f2254815dfd4, entries=54, sequenceid=70336, filesize=5.5 K
2017-05-24 01:14:45,387 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/CampaignSummaryFactsMDC/fcf0380fd986f7c3d3f3bf298e91e504/info/616f15354c2848e197b1832be2f00362, entries=70, sequenceid=70336, filesize=5.9 K
2017-05-24 01:14:45,392 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~31.33 KB/32080, currentsize=44.30 KB/45368 for region CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504. in 138ms, sequenceid=70336, compaction requested=true
2017-05-24 01:14:45,396 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on Em in region CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.
2017-05-24 01:14:45,396 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in Em of CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/CampaignSummaryFactsMDC/fcf0380fd986f7c3d3f3bf298e91e504/.tmp, totalSize=70.2 K
2017-05-24 01:14:45,401 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=63356, currentSize=4204026560, freeSize=69471296, maxSize=4273497856, heapSize=4204026560, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:14:45,510 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 (all) file(s) in Em of CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504. into 6775a3e6779b44918d37d270867e35f8(size=59.2 K), total size for store is 59.2 K. This selection was in queue for 0sec, and took 0sec to execute.
2017-05-24 01:14:45,510 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504., storeName=Em, fileCount=3, fileSize=70.2 K, priority=7, time=18808431109664527; duration=0sec
2017-05-24 01:14:45,877 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for CampaignSummaryTrends_New,99999999,1478427970677.fea5fdce86225406c7dd0dd4c7e51a38., current region memstore size 78.70 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:14:46,104 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=204028125, memsize=78.7 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/CampaignSummaryTrends_New/fea5fdce86225406c7dd0dd4c7e51a38/.tmp/9ae26d1a321f44639dc12629fdd3d427
2017-05-24 01:14:46,124 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/CampaignSummaryTrends_New/fea5fdce86225406c7dd0dd4c7e51a38/info/9ae26d1a321f44639dc12629fdd3d427, entries=5217, sequenceid=204028125, filesize=75.5 K
2017-05-24 01:14:46,127 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~78.70 MB/82524088, currentsize=0 B/0 for region CampaignSummaryTrends_New,99999999,1478427970677.fea5fdce86225406c7dd0dd4c7e51a38. in 250ms, sequenceid=204028125, compaction requested=false
2017-05-24 01:14:54,541 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 129.20 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:14:55,336 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55145151, memsize=129.2 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/07c7e3995fb54fe5890d47c24ba03b3a
2017-05-24 01:14:55,353 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/07c7e3995fb54fe5890d47c24ba03b3a, entries=124615, sequenceid=55145151, filesize=19.7 M
2017-05-24 01:14:55,357 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~129.20 MB/135477656, currentsize=5.36 MB/5621704 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 816ms, sequenceid=55145151, compaction requested=true
2017-05-24 01:15:49,001 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588441384 with entries=721, filesize=121.94 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588548958
2017-05-24 01:15:49,002 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495584694998 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495584694998
2017-05-24 01:15:49,012 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587067784 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495587067784
2017-05-24 01:15:49,023 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587496042 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495587496042
2017-05-24 01:15:49,033 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587569272 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495587569272
2017-05-24 01:15:49,043 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588074802 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495588074802
2017-05-24 01:16:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.93 GB, freeSize=54.50 MB, max=3.98 GB, blockCount=63543, accesses=143481215, hits=72414747, hitRatio=50.47%, cachingAccesses=139483906, cachingHits=72357155, cachingHitsRatio=51.87%, evictions=28690, evicted=66830190, evictedPerRun=2329.3896484375
2017-05-24 01:16:20,443 INFO [sync.0] wal.FSHLog: Slow sync cost: 150 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
2017-05-24 01:16:21,294 INFO [sync.1] wal.FSHLog: Slow sync cost: 130 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
2017-05-24 01:16:21,532 INFO [sync.4] wal.FSHLog: Slow sync cost: 118 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
2017-05-24 01:16:43,989 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.84 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:16:44,779 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55145470, memsize=128.8 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/85e4f3d1a3da4c96979206dbbf3b0f3c
2017-05-24 01:16:44,828 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/85e4f3d1a3da4c96979206dbbf3b0f3c, entries=124615, sequenceid=55145470, filesize=19.7 M
2017-05-24 01:16:44,830 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.84 MB/135100360, currentsize=5.26 MB/5519072 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 841ms, sequenceid=55145470, compaction requested=true
2017-05-24 01:16:44,835 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 01:16:44,835 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 01:16:44,839 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62764, currentSize=4164653728, freeSize=108844128, maxSize=4273497856, heapSize=4164653728, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:16:46,612 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into 8028c9109d9646b5a8cfcf1ba61ddb53(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 1sec to execute.
2017-05-24 01:16:46,612 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18808550548661966; duration=1sec
2017-05-24 01:16:50,784 INFO [sync.0] wal.FSHLog: Slow sync cost: 326 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
2017-05-24 01:16:50,786 INFO [sync.1] wal.FSHLog: Slow sync cost: 288 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
2017-05-24 01:16:51,944 INFO [sync.3] wal.FSHLog: Slow sync cost: 412 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
2017-05-24 01:17:06,748 INFO [sync.4] wal.FSHLog: Slow sync cost: 129 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
2017-05-24 01:17:07,807 INFO [sync.0] wal.FSHLog: Slow sync cost: 1054 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
2017-05-24 01:17:42,194 INFO [sync.2] wal.FSHLog: Slow sync cost: 359 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
2017-05-24 01:17:57,548 INFO [sync.2] wal.FSHLog: Slow sync cost: 222 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
2017-05-24 01:18:24,893 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588548958 with entries=3460, filesize=121.89 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588704841
2017-05-24 01:18:25,197 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.74 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:18:26,017 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55145790, memsize=128.7 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/496af3ec5a2246e698b5d01596aa74e5
2017-05-24 01:18:26,038 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/496af3ec5a2246e698b5d01596aa74e5, entries=124615, sequenceid=55145790, filesize=19.7 M
2017-05-24 01:18:26,040 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.74 MB/134997728, currentsize=5.55 MB/5823464 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 843ms, sequenceid=55145790, compaction requested=true
2017-05-24 01:19:29,289 INFO [sync.3] wal.FSHLog: Slow sync cost: 132 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK]]
2017-05-24 01:19:32,090 INFO [sync.4] wal.FSHLog: Slow sync cost: 157 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK]]
2017-05-24 01:19:41,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504. after a delay of 5540
2017-05-24 01:19:47,367 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504., current region memstore size 51.87 KB, and 1/10 column families' memstores are being flushed.
2017-05-24 01:19:47,367 INFO [MemStoreFlusher.0] regionserver.HRegion: Flushing Column Family: Mob which was occupying 47.66 KB of memstore.
2017-05-24 01:19:47,447 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=70344, memsize=47.3 K, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/CampaignSummaryFactsMDC/fcf0380fd986f7c3d3f3bf298e91e504/.tmp/7bc77a310a114b339eae77af292eb6d7
2017-05-24 01:19:47,476 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/CampaignSummaryFactsMDC/fcf0380fd986f7c3d3f3bf298e91e504/Mob/7bc77a310a114b339eae77af292eb6d7, entries=198, sequenceid=70344, filesize=7.1 K
2017-05-24 01:19:47,478 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~47.26 KB/48392, currentsize=4.61 KB/4720 for region CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504. in 111ms, sequenceid=70344, compaction requested=false
2017-05-24 01:20:10,796 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 129.03 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:20:11,624 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55146109, memsize=129.0 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/3b7d40d1a0f54569b9c4e3578b7a5ddb
2017-05-24 01:20:11,651 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/3b7d40d1a0f54569b9c4e3578b7a5ddb, entries=124615, sequenceid=55146109, filesize=19.7 M
2017-05-24 01:20:11,654 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~129.03 MB/135302120, currentsize=3.55 MB/3723896 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 858ms, sequenceid=55146109, compaction requested=true
2017-05-24 01:20:11,661 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 01:20:11,661 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 01:20:11,666 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62462, currentSize=4144144776, freeSize=129353080, maxSize=4273497856, heapSize=4144144776, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:20:13,374 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into 6c0bc4c15ba7435c950b707a4a806c0b(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 1sec to execute.
2017-05-24 01:20:13,374 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18808757374555534; duration=1sec
2017-05-24 01:20:15,602 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588704841 with entries=2097, filesize=121.85 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588815559
2017-05-24 01:20:15,604 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495585033447 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495585033447
2017-05-24 01:20:15,617 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586348846 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495586348846
2017-05-24 01:20:15,630 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586524149 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495586524149
2017-05-24 01:20:15,643 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586804332 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495586804332
2017-05-24 01:20:15,655 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586914366 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495586914366
2017-05-24 01:20:15,667 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587343879 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495587343879
2017-05-24 01:20:15,681 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587816152 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495587816152
2017-05-24 01:20:15,724 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587924986 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495587924986
2017-05-24 01:20:43,419 INFO [sync.0] wal.FSHLog: Slow sync cost: 294 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:21:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.89 GB, freeSize=89.43 MB, max=3.98 GB, blockCount=63004, accesses=145617041, hits=73504508, hitRatio=50.48%, , cachingAccesses=141616192, cachingHits=73446916, cachingHitsRatio=51.86%, evictions=29094, evicted=67873255, evictedPerRun=2332.895263671875
2017-05-24 01:21:06,696 INFO [sync.2] wal.FSHLog: Slow sync cost: 910 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:21:06,696 INFO [sync.3] wal.FSHLog: Slow sync cost: 896 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:21:11,264 INFO [sync.3] wal.FSHLog: Slow sync cost: 215 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:21:16,076 INFO [sync.0] wal.FSHLog: Slow sync cost: 271 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:21:20,210 INFO [sync.3] wal.FSHLog: Slow sync cost: 104 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:21:20,477 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., current region memstore size 128.32 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:21:21,461 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=77371, memsize=128.3 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp/10811f0214314354baeca2f6e31a287b
2017-05-24 01:21:21,482 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/info/10811f0214314354baeca2f6e31a287b, entries=659288, sequenceid=77371, filesize=6.5 M
2017-05-24 01:21:21,487 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.32 MB/134551280, currentsize=24.08 MB/25247128 for region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. in 1010ms, sequenceid=77371, compaction requested=true
2017-05-24 01:21:22,540 INFO [sync.0] wal.FSHLog: Slow sync cost: 496 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:21:22,541 INFO [sync.1] wal.FSHLog: Slow sync cost: 465 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:21:27,154 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., current region memstore size 128.35 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:21:28,240 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=77492, memsize=128.3 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp/35abc65dedae4088a65a70f288d6bf6a
2017-05-24 01:21:28,282 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/info/35abc65dedae4088a65a70f288d6bf6a, entries=656460, sequenceid=77492, filesize=6.6 M
2017-05-24 01:21:28,288 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.35 MB/134581744, currentsize=26.28 MB/27554624 for region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. in 1134ms, sequenceid=77492, compaction requested=true
2017-05-24 01:21:28,369 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588815559 with entries=1637, filesize=122.90 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588888263
2017-05-24 01:21:33,035 INFO [sync.2] wal.FSHLog: Slow sync cost: 136 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:21:33,035 INFO [sync.0] wal.FSHLog: Slow sync cost: 3695 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:21:33,035 INFO [sync.1] wal.FSHLog: Slow sync cost: 3676 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:21:47,478 INFO [sync.1] wal.FSHLog: Slow sync cost: 136 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:21:52,607 INFO [B.defaultRpcServer.handler=33,queue=3,port=16020] shortcircuit.ShortCircuitCache: ShortCircuitCache(0x5c5d6b82): could not load 1080528826_BP-1810172115-hadoop2-1478343078462 due to InvalidToken exception.
org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /apps/hbase/data/data/default/SMSCampaignStatus/13f0709e1757c082c1d370b61d4c2264/info/111079aa453d48e5aa653ea489d45f78
	at org.apache.hadoop.hdfs.BlockReaderFactory.requestFileDescriptors(BlockReaderFactory.java:589)
	at org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:488)
	at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:784)
	at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:718)
	at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:422)
	at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:333)
	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1407)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1677)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:910)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:267)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:169)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
	at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2075)
	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5467)
	at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2604)
	at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2590)
	at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2572)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2273)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
	at java.lang.Thread.run(Thread.java:745)
2017-05-24 01:21:52,608 INFO [B.defaultRpcServer.handler=33,queue=3,port=16020] hdfs.DFSClient: Access token was invalid when connecting to /hadoop6:50010 : org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /apps/hbase/data/data/default/SMSCampaignStatus/13f0709e1757c082c1d370b61d4c2264/info/111079aa453d48e5aa653ea489d45f78
2017-05-24 01:21:52,744 INFO [B.defaultRpcServer.handler=33,queue=3,port=16020] shortcircuit.ShortCircuitCache: ShortCircuitCache(0x5c5d6b82): could not load 1080658122_BP-1810172115-hadoop2-1478343078462 due to InvalidToken exception.
org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /apps/hbase/data/data/default/SMSCampaignStatus/13f0709e1757c082c1d370b61d4c2264/info/0beeef84ad3e4e89ad59df48bf9f8b5b
	at org.apache.hadoop.hdfs.BlockReaderFactory.requestFileDescriptors(BlockReaderFactory.java:589)
	at org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:488)
	at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:784)
	at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:718)
	at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:422)
	at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:333)
	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1407)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1677)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:910)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:267)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:169)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
	at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2075)
	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5467)
	at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2604)
	at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2590)
	at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2572)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2273)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
	at java.lang.Thread.run(Thread.java:745)
2017-05-24 01:21:52,745 INFO [B.defaultRpcServer.handler=33,queue=3,port=16020] hdfs.DFSClient: Access token was invalid when connecting to /hadoop6:50010 : org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /apps/hbase/data/data/default/SMSCampaignStatus/13f0709e1757c082c1d370b61d4c2264/info/0beeef84ad3e4e89ad59df48bf9f8b5b
2017-05-24 01:22:02,530 INFO [sync.2] wal.FSHLog: Slow sync cost: 266 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:22:02,531 INFO [sync.3] wal.FSHLog: Slow sync cost: 233 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:22:06,116 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.86 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:22:10,702 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55146429, memsize=128.9 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/5d5bd8ce4d3944bc95a223de4721d3ae
2017-05-24 01:22:10,744 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/5d5bd8ce4d3944bc95a223de4721d3ae, entries=124615, sequenceid=55146429, filesize=19.7 M
2017-05-24 01:22:10,749 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.86 MB/135121768, currentsize=18.06 MB/18934288 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 4633ms, sequenceid=55146429, compaction requested=true
2017-05-24 01:22:33,072 INFO [sync.3] wal.FSHLog: Slow sync cost: 3175 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:22:42,256 WARN [B.defaultRpcServer.handler=39,queue=4,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495588946657,"responsesize":416,"method":"Scan","processingtimems":15599,"client":"hadoop6:41260","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:22:44,590 INFO [sync.3] wal.FSHLog: Slow sync cost: 461 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:22:51,477 INFO [sync.4] wal.FSHLog: Slow sync cost: 1171 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:23:19,751 WARN [B.defaultRpcServer.handler=42,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495588962257,"responsesize":416,"method":"Scan","processingtimems":37494,"client":"hadoop6:41260","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:23:36,964 WARN [B.defaultRpcServer.handler=27,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589001569,"responsesize":416,"method":"Scan","processingtimems":15395,"client":"hadoop7:44330","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:23:46,110 WARN [B.defaultRpcServer.handler=23,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589013716,"responsesize":416,"method":"Scan","processingtimems":12394,"client":"hadoop6:41260","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:24:01,826 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region CampaignPerformance,33333333,1478428211809.c960d82b973a90d1cc2500ee727353a9. after a delay of 19685
2017-05-24 01:24:11,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore requesting flush for region CampaignPerformance,33333333,1478428211809.c960d82b973a90d1cc2500ee727353a9.
after a delay of 5029 2017-05-24 01:24:12,615 WARN [B.defaultRpcServer.handler=47,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589039156,"responsesize":416,"method":"Scan","processingtimems":13459,"client":"hadoop4:34552","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:24:12,619 WARN [B.defaultRpcServer.handler=27,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589039161,"responsesize":416,"method":"Scan","processingtimems":13458,"client":"hadoop6:37372","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:24:14,151 INFO [sync.1] wal.FSHLog: Slow sync cost: 1274 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:24:14,509 INFO [sync.0] wal.FSHLog: Slow sync cost: 190 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:24:19,732 INFO [sync.4] wal.FSHLog: Slow sync cost: 115 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:24:21,512 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for CampaignPerformance,33333333,1478428211809.c960d82b973a90d1cc2500ee727353a9., current region memstore size 2.32 MB, and 1/1 column 
families' memstores are being flushed. 2017-05-24 01:24:22,531 INFO [sync.1] wal.FSHLog: Slow sync cost: 1066 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:24:22,531 INFO [sync.2] wal.FSHLog: Slow sync cost: 1018 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:24:36,796 INFO [sync.2] wal.FSHLog: Slow sync cost: 884 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:24:53,843 WARN [B.defaultRpcServer.handler=2,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589054686,"responsesize":416,"method":"Scan","processingtimems":39157,"client":"hadoop6:33918","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:24:59,435 WARN [MemStoreFlusher.0] hdfs.DFSClient: Slow waitForAckedSeqno took 36751ms (threshold=30000ms) 2017-05-24 01:24:59,450 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=6992801, memsize=2.3 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/CampaignPerformance/c960d82b973a90d1cc2500ee727353a9/.tmp/ffbe0bd604f943259f90f30e2fb3bbfa 2017-05-24 01:24:59,481 INFO [MemStoreFlusher.0] regionserver.HStore: Added 
hdfs://mycluster/apps/hbase/data/data/default/CampaignPerformance/c960d82b973a90d1cc2500ee727353a9/info/ffbe0bd604f943259f90f30e2fb3bbfa, entries=1960, sequenceid=6992801, filesize=24.4 K 2017-05-24 01:24:59,482 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~2.32 MB/2430064, currentsize=0 B/0 for region CampaignPerformance,33333333,1478428211809.c960d82b973a90d1cc2500ee727353a9. in 37970ms, sequenceid=6992801, compaction requested=false 2017-05-24 01:25:00,369 WARN [B.defaultRpcServer.handler=4,queue=4,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589070166,"responsesize":416,"method":"Scan","processingtimems":30203,"client":"hadoop6:37372","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:25:00,371 WARN [B.defaultRpcServer.handler=28,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589070169,"responsesize":416,"method":"Scan","processingtimems":30202,"client":"hadoop4:34552","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:25:01,980 WARN [B.defaultRpcServer.handler=42,queue=2,port=16020] hdfs.BlockReaderFactory: I/O error constructing remote block reader. java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected local=/hadoop6:38498 remote=/hadoop7:50010] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2291) at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:422) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:816) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695) at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:679) at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1412) at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625) at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441) at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:642) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:622) at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:280) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:194) at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:268) at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:815) at org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:792) at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:592) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5615) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5766) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5553) at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2413) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) at java.lang.Thread.run(Thread.java:745) 2017-05-24 01:25:01,983 WARN [B.defaultRpcServer.handler=42,queue=2,port=16020] hdfs.DFSClient: Failed to connect to /hadoop7:50010 for block, add to deadNodes and continue. java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected local=/hadoop6:38498 remote=/hadoop7:50010] java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/hadoop6:38498 remote=/hadoop7:50010] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2291) at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:422) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:816) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695) at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:679) at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1412) at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625) at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441) at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269) at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:642) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:622) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:280) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:194) at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:268) at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:815) at org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:792) at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:592) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5615) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5766) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5553) at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2413) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) at java.lang.Thread.run(Thread.java:745) 2017-05-24 01:25:01,985 INFO [B.defaultRpcServer.handler=42,queue=2,port=16020] hdfs.DFSClient: Successfully connected to /hadoop5:50010 for 
BP-1810172115-hadoop2-1478343078462:blk_1080711628_6995865 2017-05-24 01:25:02,307 WARN [B.defaultRpcServer.handler=42,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589041917,"responsesize":416,"method":"Scan","processingtimems":60390,"client":"hadoop6:41260","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:25:31,481 WARN [B.defaultRpcServer.handler=10,queue=0,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589114472,"responsesize":416,"method":"Scan","processingtimems":17009,"client":"hadoop6:41260","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:25:43,366 INFO [sync.3] wal.FSHLog: Slow sync cost: 185 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:25:45,147 INFO [sync.4] wal.FSHLog: Slow sync cost: 1774 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:25:45,725 INFO [sync.1] wal.FSHLog: Slow sync cost: 519 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:25:50,925 WARN [B.defaultRpcServer.handler=33,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589136066,"responsesize":416,"method":"Scan","processingtimems":14859,"client":"hadoop7:44330","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:25:51,638 INFO [sync.0] wal.FSHLog: Slow sync cost: 1426 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:25:53,721 INFO [sync.1] wal.FSHLog: Slow sync cost: 263 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:25:55,818 WARN [B.defaultRpcServer.handler=48,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589145537,"responsesize":416,"method":"Scan","processingtimems":10281,"client":"hadoop6:37372","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:26:01,295 INFO [sync.0] wal.FSHLog: Slow sync cost: 312 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:26:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.83 GB, freeSize=153.69 MB, max=3.98 GB, blockCount=61987, accesses=146508003, hits=73943340, hitRatio=50.47%, , cachingAccesses=142480035, cachingHits=73885744, cachingHitsRatio=51.86%, evictions=29259, evicted=68299288, evictedPerRun=2334.300048828125 2017-05-24 
01:26:09,567 WARN [B.defaultRpcServer.handler=29,queue=4,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589155819,"responsesize":416,"method":"Scan","processingtimems":13748,"client":"hadoop6:37372","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:26:11,912 WARN [B.defaultRpcServer.handler=23,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589156386,"responsesize":416,"method":"Scan","processingtimems":15526,"client":"hadoop6:41260","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:26:11,957 INFO [sync.4] wal.FSHLog: Slow sync cost: 2284 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:26:15,086 INFO [sync.0] wal.FSHLog: Slow sync cost: 127 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:26:24,545 INFO [sync.3] wal.FSHLog: Slow sync cost: 253 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:26:39,151 INFO [sync.0] wal.FSHLog: Slow sync cost: 1263 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], 
DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:26:48,384 INFO [sync.2] wal.FSHLog: Slow sync cost: 6854 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:26:50,164 INFO [sync.4] wal.FSHLog: Slow sync cost: 395 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:26:55,835 WARN [B.defaultRpcServer.handler=25,queue=0,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589184255,"responsesize":416,"method":"Scan","processingtimems":31580,"client":"hadoop6:33918","queuetimems":1,"class":"HRegionServer"} 2017-05-24 01:26:58,953 INFO [sync.0] wal.FSHLog: Slow sync cost: 575 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:27:00,397 INFO [sync.0] wal.FSHLog: Slow sync cost: 179 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:27:01,765 INFO [sync.2] wal.FSHLog: Slow sync cost: 1289 ms, current 
pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:27:10,806 INFO [sync.4] wal.FSHLog: Slow sync cost: 240 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:27:22,184 INFO [sync.1] wal.FSHLog: Slow sync cost: 2130 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:27:30,925 INFO [sync.2] wal.FSHLog: Slow sync cost: 523 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:27:46,114 INFO [sync.3] wal.FSHLog: Slow sync cost: 3612 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]] 2017-05-24 01:27:51,375 WARN [B.defaultRpcServer.handler=41,queue=1,port=16020] hdfs.BlockReaderFactory: I/O error constructing remote block reader. java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected local=/hadoop6:46378 remote=/hadoop1:50010] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2291) at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:422) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:816) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695) at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:679) at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1412) at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625) at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:719) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:844) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:839) at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:856) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:876) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:152) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108) at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:629) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5615) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5766) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5553) at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2413) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) at java.lang.Thread.run(Thread.java:745) 2017-05-24 01:27:51,376 WARN [B.defaultRpcServer.handler=41,queue=1,port=16020] hdfs.DFSClient: Failed to connect to /hadoop1:50010 for block, add to deadNodes and continue. java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/hadoop6:46378 remote=/hadoop1:50010] java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected local=/hadoop6:46378 remote=/hadoop1:50010] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2291) at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:422) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:816) at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695) at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:679) at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1412) at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625) at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:719) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:844) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:839) at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:856) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:876) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:152) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108) at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:629) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5615) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5766) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5553) at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2413) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) at java.lang.Thread.run(Thread.java:745) 2017-05-24 01:27:51,377 INFO [B.defaultRpcServer.handler=41,queue=1,port=16020] hdfs.DFSClient: Successfully connected to /hadoop4:50010 for BP-1810172115-hadoop2-1478343078462:blk_1078391834_4662877 2017-05-24 01:27:51,571 WARN [B.defaultRpcServer.handler=32,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589260593,"responsesize":416,"method":"Scan","processingtimems":10978,"client":"hadoop6:33918","queuetimems":0,"class":"HRegionServer"} 2017-05-24 01:27:51,632 WARN [B.defaultRpcServer.handler=7,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589250508,"responsesize":416,"method":"Scan","processingtimems":21124,"client":"hadoop6:41260","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:27:51,828 WARN [B.defaultRpcServer.handler=41,queue=1,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589211315,"responsesize":416,"method":"Scan","processingtimems":60513,"client":"hadoop6:37372","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:27:57,256 INFO [sync.0] wal.FSHLog: Slow sync cost: 1506 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:28:07,820 INFO [sync.4] wal.FSHLog: Slow sync cost: 1251 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:28:09,414 INFO [sync.1] wal.FSHLog: Slow sync cost: 130 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:28:20,878 INFO [sync.0] wal.FSHLog: Slow sync cost: 858 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:28:32,133 INFO [sync.3] wal.FSHLog: Slow sync cost: 414 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:28:35,550 WARN [B.defaultRpcServer.handler=33,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589301658,"responsesize":416,"method":"Scan","processingtimems":13891,"client":"hadoop4:34552","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:28:40,309 WARN [B.defaultRpcServer.handler=49,queue=4,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589302312,"responsesize":416,"method":"Scan","processingtimems":17997,"client":"hadoop7:44330","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:28:51,894 WARN [B.defaultRpcServer.handler=27,queue=2,port=16020] hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read.
ch : java.nio.channels.SocketChannel[connected local=/hadoop6:33902 remote=/hadoop4:50010]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2291)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:422)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:816)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:679)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1412)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:642)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:622)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:280)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:268)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:815)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:792)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:592)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5615)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5766)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5553)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2413)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
2017-05-24 01:28:51,895 WARN [B.defaultRpcServer.handler=27,queue=2,port=16020] hdfs.DFSClient: Failed to connect to /hadoop4:50010 for block, add to deadNodes and continue. java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read.
ch : java.nio.channels.SocketChannel[connected local=/hadoop6:33902 remote=/hadoop4:50010]
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/hadoop6:33902 remote=/hadoop4:50010]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2291)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:422)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:816)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:679)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1412)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:642)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:622)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:280)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:268)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:815)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:792)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:592)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5615)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5766)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5553)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2413)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
2017-05-24 01:29:14,483 WARN [B.defaultRpcServer.handler=43,queue=3,port=16020] ipc.RpcServer: (responseTooSlow):
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589343234,"responsesize":416,"method":"Scan","processingtimems":11249,"client":"hadoop6:33918","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:29:31,979 INFO [B.defaultRpcServer.handler=27,queue=2,port=16020] hdfs.DFSClient: Successfully connected to /hadoop5:50010 for BP-1810172115-hadoop2-1478343078462:blk_1078391834_4662877
2017-05-24 01:29:31,979 WARN [B.defaultRpcServer.handler=22,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589354485,"responsesize":416,"method":"Scan","processingtimems":17494,"client":"hadoop6:33918","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:29:32,516 WARN [B.defaultRpcServer.handler=27,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589271830,"responsesize":148,"method":"Scan","processingtimems":100686,"client":"hadoop6:37372","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:29:38,986 WARN [B.defaultRpcServer.handler=15,queue=0,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589358904,"responsesize":416,"method":"Scan","processingtimems":20082,"client":"hadoop7:44330","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:29:38,986 WARN [B.defaultRpcServer.handler=47,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589360318,"responsesize":416,"method":"Scan","processingtimems":18668,"client":"hadoop5:45224","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:30:35,545 WARN [B.defaultRpcServer.handler=15,queue=0,port=16020] ipc.RpcServer: (responseTooSlow):
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589411342,"responsesize":416,"method":"Scan","processingtimems":24203,"client":"hadoop4:34552","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:30:41,006 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Slow sync cost: 15464 ms, current pipeline: []
2017-05-24 01:30:53,521 WARN [B.defaultRpcServer.handler=38,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589437192,"responsesize":416,"method":"Scan","processingtimems":16329,"client":"hadoop6:33918","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:30:53,784 WARN [B.defaultRpcServer.handler=10,queue=0,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589437195,"responsesize":416,"method":"Scan","processingtimems":16589,"client":"hadoop6:37372","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:30:54,176 INFO [sync.0] wal.FSHLog: Slow sync cost: 13169 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:30:54,176 INFO [sync.4] wal.FSHLog: Slow sync cost: 25899 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]]
2017-05-24 01:30:54,177 WARN [B.defaultRpcServer.handler=46,queue=1,port=16020] ipc.RpcServer: (responseTooSlow):
{"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495589428277,"responsesize":8,"method":"Multi","processingtimems":25900,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:30:54,211 INFO [sync.3] wal.FSHLog: Slow sync cost: 27418 ms, current pipeline: []
2017-05-24 01:30:54,211 WARN [B.defaultRpcServer.handler=1,queue=1,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495589426781,"responsesize":603,"method":"Multi","processingtimems":27430,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:30:54,214 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588888263 with entries=6311, filesize=128.48 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495589425426
2017-05-24 01:30:54,216 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495585248429 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495585248429
2017-05-24 01:30:54,256 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495585495885 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495585495885
2017-05-24 01:30:54,286 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495585631672 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495585631672
2017-05-24 01:30:54,327 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586052493 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495586052493
2017-05-24 01:30:54,366 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586172137 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495586172137
2017-05-24 01:30:54,396 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495586642308 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495586642308
2017-05-24 01:30:54,425 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587190237 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495587190237
2017-05-24 01:30:54,458 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495587659710 to hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495587659710
2017-05-24 01:30:54,490 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495588237843 to
hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899.default.1495588237843
2017-05-24 01:31:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.81 GB, freeSize=171.11 MB, max=3.98 GB, blockCount=61709, accesses=147411513, hits=74418695, hitRatio=50.48%, cachingAccesses=143360422, cachingHits=74361083, cachingHitsRatio=51.87%, evictions=29416, evicted=68704614, evictedPerRun=2335.62060546875
2017-05-24 01:31:02,009 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.84 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:31:05,582 INFO [sync.0] wal.FSHLog: Slow sync cost: 429 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:31:06,718 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55146748, memsize=128.8 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/df72fcfffd644c1887932cb1cdd0e010
2017-05-24 01:31:06,758 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/df72fcfffd644c1887932cb1cdd0e010, entries=124615, sequenceid=55146748, filesize=19.7 M
2017-05-24 01:31:06,766 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.84 MB/135101448, currentsize=23.03 MB/24146648 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 4757ms, sequenceid=55146748, compaction requested=true
2017-05-24 01:31:06,774 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 01:31:06,774 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 01:31:06,814 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=61934, currentSize=4108979160, freeSize=164452480, maxSize=4273497856, heapSize=4109045376, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:31:40,963 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into 606cf694816747f2ae6b908e2b88484b(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 34sec to execute.
2017-05-24 01:31:40,963 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18809412487818627; duration=34sec
2017-05-24 01:31:45,569 INFO [sync.4] wal.FSHLog: Slow sync cost: 546 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:32:09,103 WARN [B.defaultRpcServer.handler=37,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589499668,"responsesize":416,"method":"Scan","processingtimems":29435,"client":"hadoop6:37372","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:32:09,126 WARN [B.defaultRpcServer.handler=16,queue=1,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589499659,"responsesize":416,"method":"Scan","processingtimems":29467,"client":"hadoop6:33918","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:32:09,140 INFO [RS_OPEN_META-aps-hadoop6:16020-0-MetaLogRoller] wal.FSHLog: Slow sync cost: 19078 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-0ad77247-babc-4c36-9cd0-c04ad47e0894,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]]
2017-05-24 01:32:12,342 INFO [sync.4] wal.FSHLog: Slow sync cost: 887 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:32:15,135 INFO [sync.2] wal.FSHLog: Slow sync cost: 591 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:32:15,135 INFO [sync.3] wal.FSHLog: Slow sync cost: 341 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:32:25,260 INFO [sync.3] wal.FSHLog: Slow sync cost: 679 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:32:30,759 INFO [RS_OPEN_META-aps-hadoop6:16020-0-MetaLogRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899..meta.1495585907617.meta with entries=0, filesize=91 B; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899..meta.1495589510041.meta
2017-05-24 01:32:30,760 INFO [RS_OPEN_META-aps-hadoop6:16020-0-MetaLogRoller] wal.FSHLog: Archiving hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899..meta.1495585907617.meta to
hdfs://mycluster/apps/hbase/data/oldWALs/aps-hadoop6%2C16020%2C1495538759899..meta.1495585907617.meta
2017-05-24 01:32:34,277 INFO [sync.4] wal.FSHLog: Slow sync cost: 1573 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:33:00,507 INFO [sync.0] wal.FSHLog: Slow sync cost: 749 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:33:06,081 INFO [sync.2] wal.FSHLog: Slow sync cost: 499 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:33:44,800 WARN [B.defaultRpcServer.handler=31,queue=1,port=16020] hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read.
ch : java.nio.channels.SocketChannel[connected local=/hadoop6:44248 remote=/hadoop5:50010]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2291)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:422)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:816)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:679)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1412)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:719)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:844)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:839)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:856)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:876)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:152)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:629)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5615)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5766)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5553)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2413)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
2017-05-24 01:33:44,801 WARN [B.defaultRpcServer.handler=31,queue=1,port=16020] hdfs.DFSClient: Failed to connect to /hadoop5:50010 for block, add to deadNodes and continue. java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/hadoop6:44248 remote=/hadoop5:50010]
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read.
ch : java.nio.channels.SocketChannel[connected local=/hadoop6:44248 remote=/hadoop5:50010]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2291)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:422)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:816)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:679)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1412)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:719)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:844)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:839)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:856)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:876)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:152)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:629)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5615)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5766)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5553)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2413)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
2017-05-24 01:33:44,802 INFO [B.defaultRpcServer.handler=31,queue=1,port=16020] hdfs.DFSClient: Successfully connected to /hadoop1:50010 for BP-1810172115-hadoop2-1478343078462:blk_1080255947_6538013
2017-05-24 01:33:44,880 INFO [sync.0] wal.FSHLog: Slow sync cost: 424 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:33:45,163 WARN [B.defaultRpcServer.handler=31,queue=1,port=16020] ipc.RpcServer: (responseTooSlow):
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589564740,"responsesize":416,"method":"Scan","processingtimems":60423,"client":"hadoop6:33918","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:34:06,116 INFO [sync.1] wal.FSHLog: Slow sync cost: 319 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:11,565 INFO [sync.0] wal.FSHLog: Slow sync cost: 103 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:17,057 INFO [sync.4] wal.FSHLog: Slow sync cost: 571 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:18,821 INFO [sync.2] wal.FSHLog: Slow sync cost: 111 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:20,785 INFO [sync.4] wal.FSHLog: Slow sync cost: 102 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:24,881 INFO [sync.0] wal.FSHLog: Slow sync cost: 140 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:27,383 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.39 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:34:36,513 INFO [sync.0] wal.FSHLog: Slow sync cost: 306 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:36,521 INFO [sync.1] wal.FSHLog: Slow sync cost: 299 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:49,663 INFO [sync.1] wal.FSHLog: Slow sync cost: 911 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:49,663 INFO [sync.0] wal.FSHLog: Slow sync cost: 912 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:52,364 WARN [B.defaultRpcServer.handler=46,queue=1,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589664272,"responsesize":416,"method":"Scan","processingtimems":28092,"client":"hadoop6:33918","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:34:52,711 INFO [sync.4] wal.FSHLog: Slow sync cost: 217 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:52,711 INFO [sync.0] wal.FSHLog: Slow sync cost: 184 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:54,314 INFO [sync.3] wal.FSHLog: Slow sync cost: 508 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:34:54,314 INFO [sync.2] wal.FSHLog: Slow sync cost: 565 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:35:01,779 INFO [sync.1] wal.FSHLog: Slow sync cost: 255 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:35:01,782 INFO [sync.2] wal.FSHLog: Slow sync cost: 223 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:35:03,561 INFO [sync.2] wal.FSHLog: Slow sync cost: 211 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:35:03,568 INFO [sync.3] wal.FSHLog: Slow sync cost: 215 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:35:09,897 WARN [regionserver/aps-hadoop6/hadoop6:16020.logRoller] hdfs.DFSClient: Slow waitForAckedSeqno took 40610ms (threshold=30000ms)
2017-05-24 01:35:09,917 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Slow sync cost: 40630 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:35:10,257 INFO [sync.1] wal.FSHLog: Slow sync cost: 339 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:35:10,257 INFO [sync.0] wal.FSHLog: Slow sync cost: 1793 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-120b90e1-5d80-447d-b8db-b8ea53661e88,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:35:13,428 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55147070, memsize=128.4 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/b0bbd82fdda54573ad748885d4be2d2b
2017-05-24 01:35:13,444 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495589425426 with entries=3282, filesize=124.92 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495589669260
2017-05-24 01:35:13,465 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/b0bbd82fdda54573ad748885d4be2d2b, entries=124615, sequenceid=55147070, filesize=19.7 M
2017-05-24 01:35:13,467 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.39 MB/134624944, currentsize=20.51 MB/21503632 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
in 46084ms, sequenceid=55147070, compaction requested=true
2017-05-24 01:35:15,154 INFO [sync.1] wal.FSHLog: Slow sync cost: 395 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:35:18,661 INFO [sync.1] wal.FSHLog: Slow sync cost: 2836 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:35:29,438 INFO [sync.4] wal.FSHLog: Slow sync cost: 211 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:35:34,735 INFO [sync.3] wal.FSHLog: Slow sync cost: 250 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:35:50,923 INFO [sync.3] wal.FSHLog: Slow sync cost: 6567 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:35:51,897 INFO [sync.3] wal.FSHLog: Slow sync cost: 792 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:35:54,346 INFO [sync.1] wal.FSHLog: Slow sync cost: 800 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:35:59,902 INFO [sync.1] wal.FSHLog: Slow sync cost: 3783 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:36:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.81 GB, freeSize=173.58 MB, max=3.98 GB, blockCount=61670, accesses=147844216, hits=74655243, hitRatio=50.50%, cachingAccesses=143773060, cachingHits=74597627, cachingHitsRatio=51.89%, evictions=29485, evicted=68880750, evictedPerRun=2336.128662109375
2017-05-24 01:36:21,899 INFO [sync.0] wal.FSHLog: Slow sync cost: 1058 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:36:21,916 INFO [sync.1] wal.FSHLog: Slow sync cost: 103 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:36:28,056 WARN [B.defaultRpcServer.handler=43,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589776375,"responsesize":416,"method":"Scan","processingtimems":11681,"client":"hadoop5:45224","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:36:41,011 WARN [B.defaultRpcServer.handler=39,queue=4,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589790905,"responsesize":416,"method":"Scan","processingtimems":10106,"client":"hadoop6:33918","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:36:47,121 INFO [sync.2] wal.FSHLog: Slow sync cost: 5013 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:36:50,435 INFO [sync.4] wal.FSHLog: Slow sync cost: 3264 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:36:50,991 INFO [sync.3] wal.FSHLog: Slow sync cost: 403 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:10,706 INFO [sync.0] wal.FSHLog: Slow sync cost: 133 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:12,206 INFO [sync.3] wal.FSHLog: Slow sync cost: 915 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:20,954 INFO [sync.3] wal.FSHLog: Slow sync cost: 479 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:25,863 INFO [sync.0] wal.FSHLog: Slow sync cost: 1265 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:29,238 INFO [sync.0] wal.FSHLog: Slow sync cost: 3147 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:31,839 INFO [sync.1] wal.FSHLog: Slow sync cost: 724 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:38,954 INFO [sync.0] wal.FSHLog: Slow sync cost: 302 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:42,957 INFO [sync.1] wal.FSHLog: Slow sync cost: 2092 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:44,406 INFO [sync.2] wal.FSHLog: Slow sync cost: 717 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:49,382 INFO [sync.1] wal.FSHLog: Slow sync cost: 1345 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:51,009 INFO [sync.4] wal.FSHLog: Slow sync cost: 1521 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:37:54,611 INFO [sync.0] wal.FSHLog: Slow sync cost: 113 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:00,127 WARN [B.defaultRpcServer.handler=35,queue=0,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589862701,"responsesize":416,"method":"Scan","processingtimems":17426,"client":"hadoop6:54120","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:38:01,438 INFO [sync.1] wal.FSHLog: Slow sync cost: 1950 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:02,171 WARN [B.defaultRpcServer.handler=7,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589835195,"responsesize":416,"method":"Scan","processingtimems":46976,"client":"hadoop5:45224","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:38:16,103 INFO [sync.0] wal.FSHLog: Slow sync cost: 481 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:16,103 INFO [sync.1] wal.FSHLog: Slow sync cost: 470 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:16,493 INFO [sync.2] wal.FSHLog: Slow sync cost: 142 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:16,495 INFO [sync.3] wal.FSHLog: Slow sync cost: 129 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:16,787 INFO [sync.2] wal.FSHLog: Slow sync cost: 128 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:16,791 INFO [sync.3] wal.FSHLog: Slow sync cost: 120 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:17,069 INFO [sync.2] wal.FSHLog: Slow sync cost: 124 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:17,069 INFO [sync.3] wal.FSHLog: Slow sync cost: 111 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:17,969 INFO [sync.2] wal.FSHLog: Slow sync cost: 100 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:18,644 INFO [sync.3] wal.FSHLog: Slow sync cost: 222 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:18,644 INFO [sync.4] wal.FSHLog: Slow sync cost: 176 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:19,020 INFO [sync.3] wal.FSHLog: Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:19,291 INFO [sync.2] wal.FSHLog: Slow sync cost: 121 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:19,295 INFO [sync.3] wal.FSHLog: Slow sync cost: 113 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:19,553 INFO [sync.2] wal.FSHLog: Slow sync cost: 120 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:20,322 INFO [sync.0] wal.FSHLog: Slow sync cost: 709 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:20,430 INFO [sync.1] wal.FSHLog: Slow sync cost: 671 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:20,434 INFO [sync.2] wal.FSHLog: Slow sync cost: 107 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:20,915 INFO [sync.2] wal.FSHLog: Slow sync cost: 101 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:24,075 INFO [sync.1] wal.FSHLog: Slow sync cost: 976 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:24,234 INFO [sync.2] wal.FSHLog: Slow sync cost: 906 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:24,234 INFO [sync.3] wal.FSHLog: Slow sync cost: 153 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:27,105 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., current region memstore size 128.08 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:38:32,170 INFO [sync.0] wal.FSHLog: Slow sync cost: 3032 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:33,546 WARN [B.defaultRpcServer.handler=9,queue=4,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589901598,"responsesize":416,"method":"Scan","processingtimems":11948,"client":"hadoop6:54120","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:38:36,889 INFO [sync.1] wal.FSHLog: Slow sync cost: 2394 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:36,929 INFO [sync.2] wal.FSHLog: Slow sync cost: 2386 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:41,343 INFO [sync.0] wal.FSHLog: Slow sync cost: 2743 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:41,842 INFO [sync.4] wal.FSHLog: Slow sync cost: 341 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:44,481 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=55147401, memsize=128.1 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp/1aecf03c1b1d412d94b6b4f45388c0c6
2017-05-24 01:38:44,545 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/info/1aecf03c1b1d412d94b6b4f45388c0c6, entries=124615, sequenceid=55147401, filesize=19.7 M
2017-05-24 01:38:44,552 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~128.08 MB/134297560, currentsize=15.91 MB/16684728 for region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. in 17447ms, sequenceid=55147401, compaction requested=true
2017-05-24 01:38:44,559 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 01:38:44,559 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/ORMDetails/96ab0ca6ba008280cd0cc841abeeb413/.tmp, totalSize=59.2 M
2017-05-24 01:38:45,058 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=62559, currentSize=4150306104, freeSize=123191752, maxSize=4273497856, heapSize=4150306104, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:38:56,524 INFO [sync.3] wal.FSHLog: Slow sync cost: 1947 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:58,916 INFO [sync.4] wal.FSHLog: Slow sync cost: 1091 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:38:59,160 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413. into d005f950bf0149b7b4c4445a0b75e3e0(size=19.7 M), total size for store is 146.6 M. This selection was in queue for 0sec, and took 14sec to execute.
2017-05-24 01:38:59,161 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413., storeName=info, fileCount=3, fileSize=59.2 M, priority=6, time=18809870272731417; duration=14sec
2017-05-24 01:39:00,476 INFO [sync.3] wal.FSHLog: Slow sync cost: 928 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:00,603 INFO [sync.4] wal.FSHLog: Slow sync cost: 119 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:05,023 INFO [sync.1] wal.FSHLog: Slow sync cost: 387 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:13,158 INFO [sync.1] wal.FSHLog: Slow sync cost: 3650 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:17,679 INFO [sync.3] wal.FSHLog: Slow sync cost: 1515 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:23,475 INFO [sync.2] wal.FSHLog: Slow sync cost: 170 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:34,056 INFO [sync.1] wal.FSHLog: Slow sync cost: 512 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:37,754 INFO [sync.1] wal.FSHLog: Slow sync cost: 1374 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:42,078 INFO [sync.3] wal.FSHLog: Slow sync cost: 2585 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:43,825 INFO [sync.0] wal.FSHLog: Slow sync cost: 253 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:45,477 INFO [sync.1] wal.FSHLog: Slow sync cost: 975 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:49,149 INFO [sync.3] wal.FSHLog: Slow sync cost: 1501 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:49,366 INFO [sync.0] wal.FSHLog: Slow sync cost: 165 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:54,790 INFO [sync.3] wal.FSHLog: Slow sync cost: 539 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:39:59,010 INFO [sync.4] wal.FSHLog: Slow sync cost: 636 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:40:04,488 INFO [sync.2] wal.FSHLog: Slow sync cost: 240 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:40:04,505 INFO [sync.3] wal.FSHLog: Slow sync cost: 185 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:40:09,552 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Slow sync cost: 4633 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:40:10,616 INFO [sync.4] wal.FSHLog: Slow sync cost: 5100 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:40:10,636 INFO [sync.0] wal.FSHLog: Slow sync cost: 5114 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:40:10,636 INFO [sync.1] wal.FSHLog: Slow sync cost: 1083 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-01d24dbb-ac60-4a67-8c97-11733f4d3f3b,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK]]
2017-05-24 01:40:28,520 WARN [B.defaultRpcServer.handler=38,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495589995897,"responsesize":416,"method":"Scan","processingtimems":32623,"client":"hadoop5:45224","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:40:28,656 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495589669260 with entries=4827, filesize=127.65 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590004903
2017-05-24 01:40:40,733 INFO [sync.2] wal.FSHLog: Slow sync cost: 12071 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK]]
2017-05-24 01:40:40,733 WARN [B.defaultRpcServer.handler=31,queue=1,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590010873,"responsesize":603,"method":"Multi","processingtimems":29860,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:40:40,733 WARN [B.defaultRpcServer.handler=23,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590010620,"responsesize":8,"method":"Multi","processingtimems":30113,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:41:00,412 INFO [sync.1] wal.FSHLog: Slow sync cost: 500 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK]]
2017-05-24 01:41:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.92 GB, freeSize=58.10 MB, max=3.98 GB, blockCount=63501, accesses=148052817, hits=74756911, hitRatio=50.49%, , cachingAccesses=143974764, cachingHits=74699294, cachingHitsRatio=51.88%, evictions=29528, evicted=68978956, evictedPerRun=2336.052490234375
2017-05-24 01:41:09,776 INFO [sync.4] wal.FSHLog: Slow sync cost: 732 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK]]
2017-05-24 01:41:35,726 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., current region memstore size 128.66 MB, and 1/1 column families' memstores are being flushed.
2017-05-24 01:41:40,675 INFO [sync.2] wal.FSHLog: Slow sync cost: 4069 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK]]
2017-05-24 01:42:54,085 WARN [ResponseProcessor for block BP-1810172115-hadoop2-1478343078462:blk_1080771646_7056178] hdfs.DFSClient: Slow ReadProcessor read fields took 69126ms (threshold=30000ms); ack: seqno: 106 reply: SUCCESS reply: SUCCESS reply: SUCCESS downstreamAckTimeNanos: 34306019314 flag: 0 flag: 0 flag: 0, targets: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK]]
2017-05-24 01:42:54,085 WARN [MemStoreFlusher.0] hdfs.DFSClient: Slow waitForAckedSeqno took 69439ms (threshold=30000ms)
2017-05-24 01:42:54,089 WARN [B.defaultRpcServer.handler=1,queue=1,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495590146362,"responsesize":416,"method":"Scan","processingtimems":27727,"client":"hadoop5:45224","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:42:54,143 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=77613, memsize=128.7 M, hasBloomFilter=true, into tmp file hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp/b2dbc84725a1446981542aa6b6510c94
2017-05-24 01:42:54,224 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/info/b2dbc84725a1446981542aa6b6510c94, entries=662494, sequenceid=77613, filesize=6.5 M
2017-05-24 01:42:54,233 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~128.66 MB/134907112, currentsize=160.39 MB/168182736 for region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. in 78507ms, sequenceid=77613, compaction requested=true
2017-05-24 01:42:54,260 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HRegion: Starting compaction on info in region WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3.
2017-05-24 01:42:54,260 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Starting compaction of 3 file(s) in info of WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. into tmpdir=hdfs://mycluster/apps/hbase/data/data/default/WebAnalyticsUserFlow/64c90a0262dfa861044618e31604c3e3/.tmp, totalSize=19.6 M
2017-05-24 01:42:54,385 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=61240, currentSize=4062988064, freeSize=210509792, maxSize=4273497856, heapSize=4062988064, minSize=4059822848, minFactor=0.95, multiSize=2029911424, multiFactor=0.5, singleSize=1014955712, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-05-24 01:43:17,226 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.HStore: Completed compaction of 3 file(s) in info of WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3. into ad9bb713861948e1bac0ceba454bfdac(size=19.6 M), total size for store is 731.7 M. This selection was in queue for 0sec, and took 22sec to execute.
2017-05-24 01:43:17,226 INFO [regionserver/aps-hadoop6/hadoop6:16020-shortCompactions-1495539087902] regionserver.CompactSplitThread: Completed compaction: Request = regionName=WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3., storeName=info, fileCount=3, fileSize=19.6 M, priority=4, time=18810119974049095; duration=22sec
2017-05-24 01:43:49,273 INFO [sync.3] wal.FSHLog: Slow sync cost: 5997 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK]]
2017-05-24 01:43:49,273 INFO [sync.2] wal.FSHLog: Slow sync cost: 13708 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK]]
2017-05-24 01:43:49,275 INFO [sync.4] wal.FSHLog: Slow sync cost: 3137 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK]]
2017-05-24 01:43:49,275 WARN [B.defaultRpcServer.handler=12,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590215564,"responsesize":8,"method":"Multi","processingtimems":13709,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:43:49,276 INFO [sync.0] wal.FSHLog: Slow sync cost: 3015 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK]]
2017-05-24 01:43:57,796 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Slow sync cost: 1738 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-6e0a3279-95d2-4bff-ac72-ed8e01ad47ab,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK]]
2017-05-24 01:44:33,707 WARN [sync.4] hdfs.DFSClient: Slow waitForAckedSeqno took 36545ms (threshold=30000ms)
2017-05-24 01:44:33,708 WARN [sync.1] hdfs.DFSClient: Slow waitForAckedSeqno took 35912ms (threshold=30000ms)
2017-05-24 01:44:33,708 WARN [sync.0] hdfs.DFSClient: Slow waitForAckedSeqno took 36530ms (threshold=30000ms)
2017-05-24 01:44:33,708 INFO [sync.1] wal.FSHLog: Slow sync cost: 35912 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 01:44:33,708 INFO [sync.0] wal.FSHLog: Slow sync cost: 36530 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-5c91e4fa-90df-4f20-8217-049e5d671c16,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-50b6de57-9040-41d4-80e9-bc62db9d1c5c,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb,DISK]]
2017-05-24 01:44:33,708 WARN [B.defaultRpcServer.handler=0,queue=0,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590237178,"responsesize":8,"method":"Multi","processingtimems":36530,"client":"hadoop3:46942","queuetimems":1,"class":"HRegionServer"}
2017-05-24 01:44:35,610 INFO [sync.4] wal.FSHLog: Slow sync cost: 38448 ms, current pipeline: [DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:44:35,610 WARN [B.defaultRpcServer.handler=25,queue=0,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590237153,"responsesize":423,"method":"Multi","processingtimems":38457,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:44:35,611 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590004903 with entries=1045, filesize=129.48 MB; new WAL /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590236037
2017-05-24 01:44:35,619 WARN [B.defaultRpcServer.handler=31,queue=1,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590240069,"responsesize":8,"method":"Multi","processingtimems":35550,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:45:17,201 WARN [B.defaultRpcServer.handler=2,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1495590297580,"responsesize":416,"method":"Scan","processingtimems":19621,"client":"hadoop5:45224","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:45:36,344 WARN [ResponseProcessor for block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237] hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:748)
2017-05-24 01:45:36,344 WARN [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590236037 block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237] hdfs.DFSClient: Error Recovery for block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237 in pipeline DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]: bad datanode DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK]
2017-05-24 01:46:01,424 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.78 GB, freeSize=203.83 MB, max=3.98 GB, blockCount=61193, accesses=148279460, hits=74889338, hitRatio=50.51%, , cachingAccesses=144189395, cachingHits=74831721, cachingHitsRatio=51.90%, evictions=29569, evicted=69063468, evictedPerRun=2335.671630859375
2017-05-24 01:46:23,862 INFO [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590236037 block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237] hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as hadoop4:50010
    at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1393)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1217)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:46:23,864 WARN [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590236037 block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237] hdfs.DFSClient: Error Recovery for block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237 in pipeline DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]: bad datanode DatanodeInfoWithStorage[hadoop4:50010,DS-1940f918-25a9-4136-8034-e6fd0972e5a2,DISK]
2017-05-24 01:47:17,004 INFO [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590236037 block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237] hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as hadoop5:50010
    at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1393)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1217)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,004 WARN [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590236037 block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237] hdfs.DFSClient: Error Recovery for block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237 in pipeline DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK]: bad datanode DatanodeInfoWithStorage[hadoop5:50010,DS-d7a093e4-6437-4935-b4a2-0decf44fabea,DISK]
2017-05-24 01:47:17,008 WARN [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590236037 block BP-1810172115-hadoop2-1478343078462:blk_1080771705_7056237] hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,008 WARN [sync.0] hdfs.DFSClient: Error while syncing
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,009 WARN [sync.1] hdfs.DFSClient: Error while syncing
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,009 WARN [sync.4] hdfs.DFSClient: Error while syncing
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,009 ERROR [sync.0] wal.FSHLog: Error syncing, request close of WAL
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,009 WARN [sync.2] hdfs.DFSClient: Error while syncing
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,010 ERROR [sync.2] wal.FSHLog: Error syncing, request close of WAL
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411) 2017-05-24 01:47:17,009 WARN [sync.3] hdfs.DFSClient: Error while syncing java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration. 
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411) 2017-05-24 01:47:17,010 INFO [sync.2] wal.FSHLog: Slow sync cost: 71272 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]] 2017-05-24 01:47:17,010 INFO [sync.0] wal.FSHLog: Slow sync cost: 161153 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]] 2017-05-24 01:47:17,009 ERROR [sync.4] java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration. 
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,009 ERROR [sync.1] wal.FSHLog: Error syncing, request close of WAL
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,010 INFO [sync.4] wal.FSHLog: Slow sync cost: 161377 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:47:17,010 INFO [sync.1] wal.FSHLog: Slow sync cost: 71274 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:47:17,010 ERROR [sync.3] wal.FSHLog: Error syncing, request close of WAL
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,010 INFO [sync.3] wal.FSHLog: Slow sync cost: 161386 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:47:17,011 ERROR [sync.3] wal.FSHLog: Error syncing, request close of WAL
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:17,011 WARN [B.defaultRpcServer.handler=18,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590275632,"responsesize":1254,"method":"Multi","processingtimems":161379,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:47:17,011 WARN [B.defaultRpcServer.handler=37,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590275856,"responsesize":1254,"method":"Multi","processingtimems":161155,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:47:17,011 WARN [B.defaultRpcServer.handler=48,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590365737,"responsesize":1254,"method":"Multi","processingtimems":71274,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:47:17,012 WARN [B.defaultRpcServer.handler=7,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590365958,"responsesize":1254,"method":"Multi","processingtimems":71054,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:47:17,018 WARN [B.defaultRpcServer.handler=39,queue=4,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590365733,"responsesize":37534,"method":"Multi","processingtimems":71284,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:47:17,019 WARN [B.defaultRpcServer.handler=33,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1495590275621,"responsesize":37534,"method":"Multi","processingtimems":161398,"client":"hadoop3:46942","queuetimems":0,"class":"HRegionServer"}
2017-05-24 01:47:17,114 WARN [regionserver/aps-hadoop6/hadoop6:16020.append-pool1-t1] wal.FSHLog: Failed appending 204070381, requesting roll of WAL
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:23,238 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Slow sync cost: 6203 ms, current pipeline: [DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]
2017-05-24 01:47:23,239 ERROR [regionserver/aps-hadoop6/hadoop6:16020.logRoller] wal.FSHLog: Failed close of WAL writer hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590236037, unflushedEntries=11
org.apache.hadoop.hbase.regionserver.wal.FailedSyncBeforeLogCloseException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: On sync
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$SafePointZigZagLatch.waitSafePoint(FSHLog.java:1893)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.replaceWriter(FSHLog.java:964)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:733)
    at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:148)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: On sync
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:2073)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1950)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Failed appending 204070381, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:2187)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:2028)
    ... 5 more
Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:23,240 FATAL [regionserver/aps-hadoop6/hadoop6:16020.logRoller] regionserver.HRegionServer: ABORTING region server aps-hadoop6,16020,1495538759899: Failed log close in log roller
org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://mycluster/apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590236037, unflushedEntries=11
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.replaceWriter(FSHLog.java:1014)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:733)
    at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:148)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.regionserver.wal.FailedSyncBeforeLogCloseException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: On sync
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$SafePointZigZagLatch.waitSafePoint(FSHLog.java:1893)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.replaceWriter(FSHLog.java:964)
    ... 3 more
Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: On sync
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:2073)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1950)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Failed appending 204070381, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:2187)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:2028)
    ... 5 more
Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:47:23,240 FATAL [regionserver/aps-hadoop6/hadoop6:16020.logRoller] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint, org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
2017-05-24 01:47:23,297 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] regionserver.HRegionServer: Dump of metrics as JSON on abort: { "beans" : [ { "name" : "java.lang:type=Memory", "modelerType" : "sun.management.MemoryImpl", "Verbose" : true, "ObjectPendingFinalizationCount" : 0, "NonHeapMemoryUsage" : { "committed" : 98549760, "init" : 2555904, "max" : -1, "used" : 97145992 }, "HeapMemoryUsage" : { "committed" : 10683744256, "init" : 10737418240, "max" : 10683744256, "used" : 4996231736 }, "ObjectName" : "java.lang:type=Memory" } ], "beans" : [ { "name" : "Hadoop:service=HBase,name=RegionServer,sub=IPC", "modelerType" : "RegionServer,sub=IPC", "tag.Context" : "regionserver", "tag.Hostname" : "aps-hadoop6", "queueSize" : 30, "numCallsInGeneralQueue" : 0, "numCallsInReplicationQueue" : 0, "numCallsInPriorityQueue" : 0, "numOpenConnections" : 3, "numActiveHandler" : 1, "receivedBytes" : 34683159890, "exceptions.RegionMovedException" : 0, "authenticationSuccesses" : 0, "authorizationFailures" : 0, "TotalCallTime_num_ops" : 65950230, "TotalCallTime_min" : 0, "TotalCallTime_max" : 169669, "TotalCallTime_mean" : 2.5443308992250673, "TotalCallTime_median" : 1.0,
"TotalCallTime_75th_percentile" : 1.0, "TotalCallTime_95th_percentile" : 1.0, "TotalCallTime_99th_percentile" : 14.059999999999945, "exceptions.RegionTooBusyException" : 0, "exceptions.FailedSanityCheckException" : 0, "exceptions.UnknownScannerException" : 0, "exceptions.OutOfOrderScannerNextException" : 0, "exceptions" : 7, "ProcessCallTime_num_ops" : 65950230, "ProcessCallTime_min" : 0, "ProcessCallTime_max" : 169669, "ProcessCallTime_mean" : 2.515891392645636, "ProcessCallTime_median" : 0.0, "ProcessCallTime_75th_percentile" : 1.0, "ProcessCallTime_95th_percentile" : 1.0, "ProcessCallTime_99th_percentile" : 10.0, "exceptions.NotServingRegionException" : 7, "authorizationSuccesses" : 15645, "sentBytes" : 1998637078337, "QueueCallTime_num_ops" : 65950230, "QueueCallTime_min" : 0, "QueueCallTime_max" : 377, "QueueCallTime_mean" : 0.028439506579431187, "QueueCallTime_median" : 0.0, "QueueCallTime_75th_percentile" : 0.0, "QueueCallTime_95th_percentile" : 0.0, "QueueCallTime_99th_percentile" : 1.0, "authenticationFailures" : 0 } ], "beans" : [ { "name" : "Hadoop:service=HBase,name=RegionServer,sub=Replication", "modelerType" : "RegionServer,sub=Replication", "tag.Context" : "regionserver", "tag.Hostname" : "aps-hadoop6", "sink.appliedHFiles" : 0, "sink.appliedOps" : 0, "sink.ageOfLastAppliedOp" : 0, "sink.appliedBatches" : 0 } ], "beans" : [ { "name" : "Hadoop:service=HBase,name=RegionServer,sub=Server", "modelerType" : "RegionServer,sub=Server", "tag.zookeeperQuorum" : "aps-hadoop6:2181,aps-hadoop7:2181,aps-hadoop4:2181", "tag.serverName" : "aps-hadoop6,16020,1495538759899", "tag.clusterId" : "fea6e6f3-b462-4e4c-946c-f35c02a0d2f4", "tag.Context" : "regionserver", "tag.Hostname" : "aps-hadoop6", "regionCount" : 115, "storeCount" : 172, "hlogFileCount" : 12, "hlogFileSize" : 1303137333, "storeFileCount" : 125, "memStoreSize" : 320757168, "storeFileSize" : 37570070994, "regionServerStartTime" : 1495538759899, "totalRequestCount" : 75928618, "readRequestCount" : 
6524892227, "writeRequestCount" : 10592168, "checkMutateFailedCount" : 0, "checkMutatePassedCount" : 0, "storeFileIndexSize" : 1003480, "staticIndexSize" : 165131576, "staticBloomSize" : 272818018, "mutationsWithoutWALCount" : 0, "mutationsWithoutWALSize" : 0, "percentFilesLocal" : 88, "percentFilesLocalSecondaryRegions" : 0, "splitQueueLength" : 0, "compactionQueueLength" : 0, "flushQueueLength" : 0, "blockCacheFreeSize" : 213601848, "blockCacheCount" : 61194, "blockCacheSize" : 4059896008, "blockCacheHitCount" : 74889581, "blockCacheHitCountPrimary" : 74889581, "blockCacheMissCount" : 73392762, "blockCacheMissCountPrimary" : 73392762, "blockCacheEvictionCount" : 69063468, "blockCacheEvictionCountPrimary" : 69063468, "blockCacheCountHitPercent" : 50.0, "blockCacheExpressHitPercent" : 51, "updatesBlockedTime" : 0, "flushedCellsCount" : 158027510, "compactedCellsCount" : 304941095, "majorCompactedCellsCount" : 53659350, "flushedCellsSize" : 62453732576, "compactedCellsSize" : 28607345292, "majorCompactedCellsSize" : 2567912808, "blockedRequestCount" : 0, "Mutate_num_ops" : 614022, "Mutate_min" : 0, "Mutate_max" : 169669, "Mutate_mean" : 18.75690610434154, "Mutate_median" : 2.0, "Mutate_75th_percentile" : 6.0, "Mutate_95th_percentile" : 65.0, "Mutate_99th_percentile" : 1718.5199999999895, "slowAppendCount" : 0, "slowDeleteCount" : 0, "Increment_num_ops" : 0, "Increment_min" : 0, "Increment_max" : 0, "Increment_mean" : 0.0, "Increment_median" : 0.0, "Increment_75th_percentile" : 0.0, "Increment_95th_percentile" : 0.0, "Increment_99th_percentile" : 0.0, "Replay_num_ops" : 0, "Replay_min" : 0, "Replay_max" : 0, "Replay_mean" : 0.0, "Replay_median" : 0.0, "Replay_75th_percentile" : 0.0, "Replay_95th_percentile" : 0.0, "Replay_99th_percentile" : 0.0, "FlushTime_num_ops" : 555, "FlushTime_min" : 659, "FlushTime_max" : 171246, "FlushTime_mean" : 9723.082882882883, "FlushTime_median" : 1034.0, "FlushTime_75th_percentile" : 10808.0, "FlushTime_95th_percentile" : 51941.0, 
"FlushTime_99th_percentile" : 91166.60000000011, "Delete_num_ops" : 2, "Delete_min" : 2, "Delete_max" : 3, "Delete_mean" : 2.5, "Delete_median" : 2.5, "Delete_75th_percentile" : 3.0, "Delete_95th_percentile" : 3.0, "Delete_99th_percentile" : 3.0, "splitRequestCount" : 0, "splitSuccessCount" : 0, "slowGetCount" : 0, "Get_num_ops" : 5, "Get_min" : 0, "Get_max" : 4, "Get_mean" : 2.0, "Get_median" : 2.0, "Get_75th_percentile" : 4.0, "Get_95th_percentile" : 4.0, "Get_99th_percentile" : 4.0, "ScanNext_num_ops" : 65294138, "ScanNext_min" : 0, "ScanNext_max" : 2474574, "ScanNext_mean" : 30176.540854264742, "ScanNext_median" : 18772.0, "ScanNext_75th_percentile" : 18800.0, "ScanNext_95th_percentile" : 93400.0, "ScanNext_99th_percentile" : 93601.0, "slowPutCount" : 1048, "slowIncrementCount" : 0, "Append_num_ops" : 0, "Append_min" : 0, "Append_max" : 0, "Append_mean" : 0.0, "Append_median" : 0.0, "Append_75th_percentile" : 0.0, "Append_95th_percentile" : 0.0, "Append_99th_percentile" : 0.0, "SplitTime_num_ops" : 0, "SplitTime_min" : 0, "SplitTime_max" : 0, "SplitTime_mean" : 0.0, "SplitTime_median" : 0.0, "SplitTime_75th_percentile" : 0.0, "SplitTime_95th_percentile" : 0.0, "SplitTime_99th_percentile" : 0.0 } ] }
2017-05-24 01:47:23,310 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] regionserver.HRegionServer: STOPPED: Failed log close in log roller
2017-05-24 01:47:23,310 INFO [regionserver/aps-hadoop6/hadoop6:16020.logRoller] regionserver.LogRoller: LogRoller exiting.
2017-05-24 01:47:23,310 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2017-05-24 01:47:23,312 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.HRegionServer: Stopping infoServer
2017-05-24 01:47:23,312 INFO [SplitLogWorker-aps-hadoop6:16020] regionserver.SplitLogWorker: SplitLogWorker interrupted. Exiting.
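The abort above follows a well-known small-cluster pattern: the block is written with replication factor 3, a datanode in the pipeline fails, and the HDFS client's DEFAULT replace-datanode-on-failure policy insists on finding a replacement; with no spare healthy datanode available, every WAL sync throws, the WAL is marked damaged, and the region server kills itself. A commonly suggested mitigation (a sketch only; it masks the symptom and does not fix the unhealthy/slow datanodes, and the exact choice of values depends on cluster size and durability requirements) is to relax the policy in the client-side hdfs-site.xml used by HBase:

```xml
<!-- hdfs-site.xml on the HBase side (sketch, not a definitive fix).
     On very small clusters (~3 datanodes) some operators instead set
     the .policy key to NEVER. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
</property>
<property>
  <!-- Keep writing to the remaining datanodes if no replacement is found,
     rather than failing the pipeline (available in Hadoop 2.6+). -->
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>
```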
2017-05-24 01:47:23,313 INFO [SplitLogWorker-aps-hadoop6:16020] regionserver.SplitLogWorker: SplitLogWorker aps-hadoop6,16020,1495538759899 exiting 2017-05-24 01:47:23,329 INFO [regionserver/aps-hadoop6/hadoop6:16020] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:16030 2017-05-24 01:47:23,430 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.HeapMemoryManager: Stoping HeapMemoryTuner chore. 2017-05-24 01:47:23,430 INFO [regionserver/aps-hadoop6/hadoop6:16020] flush.RegionServerFlushTableProcedureManager: Stopping region server flush procedure manager abruptly. 2017-05-24 01:47:23,430 INFO [MemStoreFlusher.1] regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting 2017-05-24 01:47:23,430 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher: MemStoreFlusher.0 exiting 2017-05-24 01:47:23,430 INFO [regionserver/aps-hadoop6/hadoop6:16020] snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager abruptly. 2017-05-24 01:47:23,434 INFO [StoreCloserThread-HsearchIndexConfig,,1478428288064.c202098b27beefc3fb20a3d8fbbd4130.-1] regionserver.HStore: Closed info 2017-05-24 01:47:23,434 INFO [StoreCloserThread-UserAnalytics360UserReports,66666666,1478428457233.063d297f1b531d249ef34545ce71c266.-1] regionserver.HStore: Closed info 2017-05-24 01:47:23,435 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.HRegionServer: aborting server aps-hadoop6,16020,1495538759899 2017-05-24 01:47:23,435 INFO [regionserver/aps-hadoop6/hadoop6:16020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x25887d92d0db4c5 2017-05-24 01:47:23,437 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed HsearchIndexConfig,,1478428288064.c202098b27beefc3fb20a3d8fbbd4130. 2017-05-24 01:47:23,437 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed UserAnalytics360UserReports,66666666,1478428457233.063d297f1b531d249ef34545ce71c266. 
2017-05-24 01:47:23,437 INFO [StoreCloserThread-SocialMediaUserBehaviour,33333333,1478427797767.8fbbc679af9b7fb9dabc13a3ff3bc564.-1] regionserver.HStore: Closed info 2017-05-24 01:47:23,437 INFO [regionserver/aps-hadoop6/hadoop6:16020] zookeeper.ZooKeeper: Session: 0x25887d92d0db4c5 closed 2017-05-24 01:47:23,437 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed SocialMediaUserBehaviour,33333333,1478427797767.8fbbc679af9b7fb9dabc13a3ff3bc564. 2017-05-24 01:47:23,437 INFO [regionserver/aps-hadoop6/hadoop6:16020-EventThread] zookeeper.ClientCnxn: EventThread shut down 2017-05-24 01:47:23,437 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.CompactSplitThread: Waiting for Split Thread to finish... 2017-05-24 01:47:23,437 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.CompactSplitThread: Waiting for Merge Thread to finish... 2017-05-24 01:47:23,437 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.CompactSplitThread: Waiting for Large Compaction Thread to finish... 2017-05-24 01:47:23,437 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.CompactSplitThread: Waiting for Small Compaction Thread to finish... 2017-05-24 01:47:23,438 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.HRegionServer: Waiting on 112 regions to close 2017-05-24 01:47:23,484 INFO [StoreCloserThread-CampaignConversionTracking,99999999,1478427850748.64b363db5c8e9742c5dbf12eb70bccae.-1] regionserver.HStore: Closed info 2017-05-24 01:47:23,484 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignConversionTracking,99999999,1478427850748.64b363db5c8e9742c5dbf12eb70bccae. 
2017-05-24 01:47:23,485 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,485 INFO [RS_CLOSE_META-aps-hadoop6:16020-0] regionserver.HRegion: Closed hbase:meta,,1.1588230740
2017-05-24 01:47:23,485 INFO [StoreCloserThread-ORMDetails,cccccccc,1484038154823.ea296e524e53a52cbdfec2354947abc4.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,485 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed ORMDetails,cccccccc,1484038154823.ea296e524e53a52cbdfec2354947abc4.
2017-05-24 01:47:23,485 INFO [StoreCloserThread-SocialMediaAnalyticsRecipients,,1478427863544.18fe1b7b6515c19926516abaffd3609d.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,486 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed SocialMediaAnalyticsRecipients,,1478427863544.18fe1b7b6515c19926516abaffd3609d.
2017-05-24 01:47:23,486 INFO [StoreCloserThread-WeekDayList,,1478427657453.43e2b6f462bdb4ed04f9a28a602a6d99.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,486 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed WeekDayList,,1478427657453.43e2b6f462bdb4ed04f9a28a602a6d99.
2017-05-24 01:47:23,498 INFO [StoreCloserThread-CampaignPerformance,33333333,1478428211809.c960d82b973a90d1cc2500ee727353a9.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,498 ERROR [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Memstore size is 551704
2017-05-24 01:47:23,498 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignPerformance,33333333,1478428211809.c960d82b973a90d1cc2500ee727353a9.
2017-05-24 01:47:23,500 INFO [StoreCloserThread-CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115.-1] regionserver.HStore: Closed Fb
2017-05-24 01:47:23,502 INFO [StoreCloserThread-test1,33333333,1480081852792.0fdb7e95b5bbaadb083dcc85a160230d.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,502 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed test1,33333333,1480081852792.0fdb7e95b5bbaadb083dcc85a160230d.
2017-05-24 01:47:23,505 INFO [StoreCloserThread-CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115.-1] regionserver.HStore: Closed Fp
2017-05-24 01:47:23,505 INFO [StoreCloserThread-ORMDetails,34nwuvp20384rs_3a1e6c98763edf73d0e5e6d0490342e9D11BlogsNeutralUnited States/Pakistanen20170313160000,1491671630778.bfd27f8f5aeeec02ceb12eb12a7b0917.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,505 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed ORMDetails,34nwuvp20384rs_3a1e6c98763edf73d0e5e6d0490342e9D11BlogsNeutralUnited States/Pakistanen20170313160000,1491671630778.bfd27f8f5aeeec02ceb12eb12a7b0917.
2017-05-24 01:47:23,507 INFO [StoreCloserThread-CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115.-1] regionserver.HStore: Closed Ga
2017-05-24 01:47:23,507 INFO [StoreCloserThread-CampaignOpenTransaction,66666666,1478427715271.29bde7ee3a177d3d75948c1e1d601397.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,507 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignOpenTransaction,66666666,1478427715271.29bde7ee3a177d3d75948c1e1d601397.
2017-05-24 01:47:23,508 INFO [StoreCloserThread-CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115.-1] regionserver.HStore: Closed Lnk
2017-05-24 01:47:23,509 INFO [StoreCloserThread-CampaignSMSTransaction,lozAcwnoUdDXKGD93201604141116,1479221891043.5f5c7930c81170df98b1a7c7a9204b1d.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,509 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignSMSTransaction,lozAcwnoUdDXKGD93201604141116,1479221891043.5f5c7930c81170df98b1a7c7a9204b1d.
2017-05-24 01:47:23,509 INFO [StoreCloserThread-CampaignSummaryGeographyByCityMDC,66666666,1478428033615.b0b7ac203e84762d96860e6316094577.-1] regionserver.HStore: Closed Geo
2017-05-24 01:47:23,509 INFO [StoreCloserThread-CampaignSummaryFactsByEMailClient,,1478428266204.d97bdd9080dcacb0b74493b539d6a233.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,509 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignSummaryFactsByEMailClient,,1478428266204.d97bdd9080dcacb0b74493b539d6a233.
2017-05-24 01:47:23,509 INFO [StoreCloserThread-CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115.-1] regionserver.HStore: Closed Ra
2017-05-24 01:47:23,510 INFO [StoreCloserThread-CampaignSummaryGeographyByCityMDC,66666666,1478428033615.b0b7ac203e84762d96860e6316094577.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,510 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignSummaryGeographyByCityMDC,66666666,1478428033615.b0b7ac203e84762d96860e6316094577.
2017-05-24 01:47:23,511 INFO [StoreCloserThread-CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115.-1] regionserver.HStore: Closed Tw
2017-05-24 01:47:23,512 INFO [StoreCloserThread-WebAnalyticsUserFlow,cDGww3yRACityDelhi, India01494233669,1494631717830.0278795e1a44f1d7635117057f0aa944.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,512 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed WebAnalyticsUserFlow,cDGww3yRACityDelhi, India01494233669,1494631717830.0278795e1a44f1d7635117057f0aa944.
2017-05-24 01:47:23,512 INFO [StoreCloserThread-SMSCampaignStatus,66666666,1478428281833.7e13403c0f03f448de92ef604afc1ad3.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,512 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed SMSCampaignStatus,66666666,1478428281833.7e13403c0f03f448de92ef604afc1ad3.
2017-05-24 01:47:23,512 INFO [StoreCloserThread-CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115.-1] regionserver.HStore: Closed We
2017-05-24 01:47:23,512 INFO [StoreCloserThread-UserAnalytics360OverviewBehaviour,cccccccc,1478428297431.5461123b9dc0917d7996e3e6cac530ba.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,513 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed UserAnalytics360OverviewBehaviour,cccccccc,1478428297431.5461123b9dc0917d7996e3e6cac530ba.
2017-05-24 01:47:23,514 INFO [StoreCloserThread-EmailCampaignStatus,34nwOPJmHOPENH5SZD,1490095253794.ef24ef158ef0500081fb9e6eb46c3406.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,514 INFO [StoreCloserThread-CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115.-1] regionserver.HStore: Closed Yt
2017-05-24 01:47:23,514 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed EmailCampaignStatus,34nwOPJmHOPENH5SZD,1490095253794.ef24ef158ef0500081fb9e6eb46c3406.
2017-05-24 01:47:23,516 INFO [StoreCloserThread-CampaignGoal,66666666,1478511303047.afcb740574f84dcab0853c38fcf78bf7.-1] regionserver.HStore: Closed Fb
2017-05-24 01:47:23,516 INFO [StoreCloserThread-CampaignGoal,66666666,1478511303047.afcb740574f84dcab0853c38fcf78bf7.-1] regionserver.HStore: Closed Fp
2017-05-24 01:47:23,516 INFO [StoreCloserThread-CampaignGoal,66666666,1478511303047.afcb740574f84dcab0853c38fcf78bf7.-1] regionserver.HStore: Closed Ga
2017-05-24 01:47:23,516 INFO [StoreCloserThread-CampaignGoal,66666666,1478511303047.afcb740574f84dcab0853c38fcf78bf7.-1] regionserver.HStore: Closed Lnk
2017-05-24 01:47:23,516 INFO [StoreCloserThread-CampaignBlastScheduleTransaction,00zAcwyzKG7HNFKPM,1493319850990.186f5de6827df65054da9e1780efac9e.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,516 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignBlastScheduleTransaction,00zAcwyzKG7HNFKPM,1493319850990.186f5de6827df65054da9e1780efac9e.
2017-05-24 01:47:23,516 INFO [StoreCloserThread-CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,516 ERROR [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Memstore size is 44152
2017-05-24 01:47:23,516 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignGoal,cccccccc,1478511303047.3097db2b5f88e47afafb31b638d68115.
2017-05-24 01:47:23,516 INFO [StoreCloserThread-CampaignGoal,66666666,1478511303047.afcb740574f84dcab0853c38fcf78bf7.-1] regionserver.HStore: Closed Ra
2017-05-24 01:47:23,516 INFO [StoreCloserThread-CampaignGoal,66666666,1478511303047.afcb740574f84dcab0853c38fcf78bf7.-1] regionserver.HStore: Closed Tw
2017-05-24 01:47:23,516 INFO [StoreCloserThread-CampaignGoal,66666666,1478511303047.afcb740574f84dcab0853c38fcf78bf7.-1] regionserver.HStore: Closed We
2017-05-24 01:47:23,516 INFO [StoreCloserThread-CampaignGoal,66666666,1478511303047.afcb740574f84dcab0853c38fcf78bf7.-1] regionserver.HStore: Closed Yt
2017-05-24 01:47:23,517 INFO [StoreCloserThread-CampaignSummaryDemographics,cccccccc,1478427988763.0a32baa44cabc5a4da15c79d2df14dd6.-1] regionserver.HStore: Closed Dg
2017-05-24 01:47:23,519 INFO [StoreCloserThread-CampaignSummaryDemographics,cccccccc,1478427988763.0a32baa44cabc5a4da15c79d2df14dd6.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,519 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignSummaryDemographics,cccccccc,1478427988763.0a32baa44cabc5a4da15c79d2df14dd6.
2017-05-24 01:47:23,519 INFO [StoreCloserThread-CampaignGoal,66666666,1478511303047.afcb740574f84dcab0853c38fcf78bf7.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,519 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignGoal,66666666,1478511303047.afcb740574f84dcab0853c38fcf78bf7.
2017-05-24 01:47:23,520 INFO [StoreCloserThread-CampaignSMSTransaction,FEzAcwHIIE5YL38,1488452469954.315844264f598f452e11c8480364f385.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,520 INFO [StoreCloserThread-ORMOverview,33333333,1478428415029.31b981fd9587697dcd84a137948684a0.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,520 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSMSTransaction,FEzAcwHIIE5YL38,1488452469954.315844264f598f452e11c8480364f385.
2017-05-24 01:47:23,520 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed ORMOverview,33333333,1478428415029.31b981fd9587697dcd84a137948684a0.
2017-05-24 01:47:23,520 INFO [StoreCloserThread-CampaignQRCodeTransaction,,1478427783205.5fd164671a8ea2a8cb0825672d7a309e.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,520 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignQRCodeTransaction,,1478427783205.5fd164671a8ea2a8cb0825672d7a309e.
2017-05-24 01:47:23,521 INFO [StoreCloserThread-CampaignSMSLinkClicksTransaction,99999999,1478427776736.ccca78ef1968ab7ca24af3e065b07fdd.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,521 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSMSLinkClicksTransaction,99999999,1478427776736.ccca78ef1968ab7ca24af3e065b07fdd.
2017-05-24 01:47:23,521 INFO [StoreCloserThread-CampaignMailBounce,cccccccc,1489993661185.32f29a986ca0d5c87e0b9bef07b386a7.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,521 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignMailBounce,cccccccc,1489993661185.32f29a986ca0d5c87e0b9bef07b386a7.
2017-05-24 01:47:23,521 INFO [StoreCloserThread-CampaignBlastScheduleTransaction,99999999,1494285503142.12f3a4bbb6804e99732c49f0dfbb13a9.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,521 INFO [StoreCloserThread-WorkflowJobStatus,,1478428303673.ceacf0e3688123577a1ae4af06f20a24.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,521 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignBlastScheduleTransaction,99999999,1494285503142.12f3a4bbb6804e99732c49f0dfbb13a9.
2017-05-24 01:47:23,521 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed WorkflowJobStatus,,1478428303673.ceacf0e3688123577a1ae4af06f20a24.
2017-05-24 01:47:23,522 INFO [StoreCloserThread-ORMDetails,GkMwabF20152rs_36cf7210f7870898b99a6b9e436f30deD24newsNeutralIndiaen20170202050000,1488445255139.c1cee390701043ab50d1a69cadb6dc35.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,522 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed ORMDetails,GkMwabF20152rs_36cf7210f7870898b99a6b9e436f30deD24newsNeutralIndiaen20170202050000,1488445255139.c1cee390701043ab50d1a69cadb6dc35.
2017-05-24 01:47:23,522 INFO [StoreCloserThread-CampaignSummaryGeographyByCity,cccccccc,1478428019768.c8bd20a43a3df953cbb47ce31f18cb8f.-1] regionserver.HStore: Closed Geo
2017-05-24 01:47:23,522 INFO [StoreCloserThread-CampaignSummaryGeographyByCity,cccccccc,1478428019768.c8bd20a43a3df953cbb47ce31f18cb8f.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,522 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSummaryGeographyByCity,cccccccc,1478428019768.c8bd20a43a3df953cbb47ce31f18cb8f.
2017-05-24 01:47:23,522 INFO [StoreCloserThread-test2,99999999,1491991935987.b7f62e07c0b8e6d545ffa40826776e66.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,522 INFO [StoreCloserThread-test1,99999999,1480081852792.8544509a68f97e5718322d6dbac29189.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,522 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed test2,99999999,1491991935987.b7f62e07c0b8e6d545ffa40826776e66.
2017-05-24 01:47:23,522 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed test1,99999999,1480081852792.8544509a68f97e5718322d6dbac29189.
2017-05-24 01:47:23,522 INFO [StoreCloserThread-UserAnalytics360UserCampaignReports,33333333,1478428512513.53f124970866e57b9b3e44dc2b3c52d6.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,522 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed UserAnalytics360UserCampaignReports,33333333,1478428512513.53f124970866e57b9b3e44dc2b3c52d6.
2017-05-24 01:47:23,523 INFO [StoreCloserThread-Recipients,zAcwGNW6Q,1482409867665.0eeb9b55ab67fb9da32253cc26ec21bf.-1] regionserver.HStore: Closed Prop
2017-05-24 01:47:23,523 INFO [StoreCloserThread-Recipients,zAcwGNW6Q,1482409867665.0eeb9b55ab67fb9da32253cc26ec21bf.-1] regionserver.HStore: Closed Seg
2017-05-24 01:47:23,523 INFO [StoreCloserThread-ORMDetails,34nwuvp20384rs_7c4e3863f51f34bfc388460aa8b0a2a5D17DISCUSSIONNegativeen20170319084000,1490786472681.d2a6a48fe67fd4ad858828910d32b700.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,523 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed ORMDetails,34nwuvp20384rs_7c4e3863f51f34bfc388460aa8b0a2a5D17DISCUSSIONNegativeen20170319084000,1490786472681.d2a6a48fe67fd4ad858828910d32b700.
2017-05-24 01:47:23,523 INFO [StoreCloserThread-EmailCampaignStatus,zAcwKWbYoBLAST9PD7S,1494848774530.e5bb79f28b75c794cfcbfca9e08507cd.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,523 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed EmailCampaignStatus,zAcwKWbYoBLAST9PD7S,1494848774530.e5bb79f28b75c794cfcbfca9e08507cd.
2017-05-24 01:47:23,523 INFO [StoreCloserThread-ModelsConfigSync,33333333,1478428291186.a3031397845bbbf714f6c4f2309c3f4e.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,523 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed ModelsConfigSync,33333333,1478428291186.a3031397845bbbf714f6c4f2309c3f4e.
2017-05-24 01:47:23,524 INFO [StoreCloserThread-Recipients,zAcwGNW6Q,1482409867665.0eeb9b55ab67fb9da32253cc26ec21bf.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,524 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed Recipients,zAcwGNW6Q,1482409867665.0eeb9b55ab67fb9da32253cc26ec21bf.
2017-05-24 01:47:23,524 INFO [StoreCloserThread-CampaignROIForecasting,33333333,1478428263066.2c5b1325855e4fc971747f217bdd7621.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,524 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignROIForecasting,33333333,1478428263066.2c5b1325855e4fc971747f217bdd7621.
2017-05-24 01:47:23,524 INFO [StoreCloserThread-test1,66666666,1480081852792.82637341f05b9423dd96918839fcf897.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,524 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed test1,66666666,1480081852792.82637341f05b9423dd96918839fcf897.
2017-05-24 01:47:23,524 INFO [StoreCloserThread-CampaignSummaryIndustrySegment,,1478428096368.bb225673e1cdda4e705d3f144b490d34.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,524 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignSummaryIndustrySegment,,1478428096368.bb225673e1cdda4e705d3f144b490d34.
2017-05-24 01:47:23,524 INFO [StoreCloserThread-CampaignSummaryTrends_New,66666666,1478427970677.ae8b0e9341b11f06b65dd20f4ef829b3.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,524 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignSummaryTrends_New,66666666,1478427970677.ae8b0e9341b11f06b65dd20f4ef829b3.
2017-05-24 01:47:23,525 INFO [StoreCloserThread-CampaignSMSTransaction_testing,66666666,1493727222020.3c215ba34acfbfe9966c8192c4e7a320.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,525 INFO [StoreCloserThread-CampaignSMSTransaction,79zAcwQ9RYbYbS4K24,1479218455453.cebb335faa0bd5715303e42396b4826b.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,525 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignSMSTransaction_testing,66666666,1493727222020.3c215ba34acfbfe9966c8192c4e7a320.
2017-05-24 01:47:23,525 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSMSTransaction,79zAcwQ9RYbYbS4K24,1479218455453.cebb335faa0bd5715303e42396b4826b.
2017-05-24 01:47:23,525 INFO [StoreCloserThread-DashboardAverageTimeToConversion,99999999,1478428382583.d7dc9782440ebd333405e3432f28fe08.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,525 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed DashboardAverageTimeToConversion,99999999,1478428382583.d7dc9782440ebd333405e3432f28fe08.
2017-05-24 01:47:23,525 INFO [StoreCloserThread-RecommendationWeekday,,1478428318168.7b9a3545d56322c093e2a8251076f3d2.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,525 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed RecommendationWeekday,,1478428318168.7b9a3545d56322c093e2a8251076f3d2.
2017-05-24 01:47:23,525 INFO [StoreCloserThread-CampaignSMSTransaction_Team,,1495174795514.8175be394cb0680779690b02d8103ec6.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,525 INFO [StoreCloserThread-CampaignSMSLinkClicksTransaction,00zAcwtz9CXFPE7B20151108021038,1484297209575.67a50925e778a529c669360f22cf9cfa.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,525 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignSMSTransaction_Team,,1495174795514.8175be394cb0680779690b02d8103ec6.
2017-05-24 01:47:23,525 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignSMSLinkClicksTransaction,00zAcwtz9CXFPE7B20151108021038,1484297209575.67a50925e778a529c669360f22cf9cfa.
2017-05-24 01:47:23,525 INFO [StoreCloserThread-CampaignSummaryImpact,,1478427995538.fed59b8c4fe374f3ca104c505b2586c5.-1] regionserver.HStore: Closed Dg
2017-05-24 01:47:23,525 INFO [StoreCloserThread-CampaignSummaryImpact,,1478427995538.fed59b8c4fe374f3ca104c505b2586c5.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,525 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignSummaryImpact,,1478427995538.fed59b8c4fe374f3ca104c505b2586c5.
2017-05-24 01:47:23,526 INFO [StoreCloserThread-Recipients,zAcwKJGYT,1482409867665.3c27b53f7c22e710a1f5ae3b9211c4e4.-1] regionserver.HStore: Closed Prop
2017-05-24 01:47:23,526 INFO [StoreCloserThread-Recipients,zAcwKJGYT,1482409867665.3c27b53f7c22e710a1f5ae3b9211c4e4.-1] regionserver.HStore: Closed Seg
2017-05-24 01:47:23,526 INFO [StoreCloserThread-test2,33333333,1491991935987.d5cb02658d778a9b8cdda78cedacb593.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,526 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed test2,33333333,1491991935987.d5cb02658d778a9b8cdda78cedacb593.
2017-05-24 01:47:23,526 INFO [StoreCloserThread-CampaignSummaryDetailGeographyByCityMDC,,1478428063068.98d2323969d12a93fce4c821189d0b2e.-1] regionserver.HStore: Closed Geo
2017-05-24 01:47:23,526 INFO [StoreCloserThread-CampaignSummaryTrends_New,,1478427970677.210f9d9597d8731ccea5a83c8f5a2d9f.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,526 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignSummaryTrends_New,,1478427970677.210f9d9597d8731ccea5a83c8f5a2d9f.
2017-05-24 01:47:23,526 INFO [StoreCloserThread-Recipients,zAcwKJGYT,1482409867665.3c27b53f7c22e710a1f5ae3b9211c4e4.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,526 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed Recipients,zAcwKJGYT,1482409867665.3c27b53f7c22e710a1f5ae3b9211c4e4.
2017-05-24 01:47:23,527 INFO [StoreCloserThread-UserAnalytics360ViralityMap,cccccccc,1478428479264.1e30bf052601f8a0d20809cb289bf5d3.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,527 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed UserAnalytics360ViralityMap,cccccccc,1478428479264.1e30bf052601f8a0d20809cb289bf5d3.
2017-05-24 01:47:23,527 INFO [StoreCloserThread-UserAnalytics360UserInfo,66666666,1478428466019.9e8fb611891a1bed53a904af28987dc2.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,527 INFO [StoreCloserThread-CampaignSummaryDetailGeographyByCityMDC,,1478428063068.98d2323969d12a93fce4c821189d0b2e.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,527 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed UserAnalytics360UserInfo,66666666,1478428466019.9e8fb611891a1bed53a904af28987dc2.
2017-05-24 01:47:23,527 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSummaryDetailGeographyByCityMDC,,1478428063068.98d2323969d12a93fce4c821189d0b2e.
2017-05-24 01:47:23,527 INFO [StoreCloserThread-DashboardAverageTimeToConversion,cccccccc,1478428382583.ac43db8941a81b2cb14aec4ab7bd4943.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,527 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed DashboardAverageTimeToConversion,cccccccc,1478428382583.ac43db8941a81b2cb14aec4ab7bd4943.
2017-05-24 01:47:23,527 INFO [StoreCloserThread-Recipients,8dWwFbkQg,1478768079675.516231f6d24c5d09c46e1b9bfe188648.-1] regionserver.HStore: Closed Prop
2017-05-24 01:47:23,527 INFO [StoreCloserThread-Recipients,8dWwFbkQg,1478768079675.516231f6d24c5d09c46e1b9bfe188648.-1] regionserver.HStore: Closed Seg
2017-05-24 01:47:23,528 INFO [StoreCloserThread-WebAnalyticsUserFlow,cccccccc,1494598376307.ded232c57e472e24c9d63b87016dc7e8.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,528 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed WebAnalyticsUserFlow,cccccccc,1494598376307.ded232c57e472e24c9d63b87016dc7e8.
2017-05-24 01:47:23,528 INFO [StoreCloserThread-Recipients,8dWwFbkQg,1478768079675.516231f6d24c5d09c46e1b9bfe188648.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,528 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed Recipients,8dWwFbkQg,1478768079675.516231f6d24c5d09c46e1b9bfe188648.
2017-05-24 01:47:23,528 INFO [StoreCloserThread-UserAnalytics360UserInfo,99999999,1478428466019.3eb7ac856d36dfe482da50f5bb9045f1.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,528 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed UserAnalytics360UserInfo,99999999,1478428466019.3eb7ac856d36dfe482da50f5bb9045f1.
2017-05-24 01:47:23,528 INFO [StoreCloserThread-AverageCampaignPerformance,33333333,1478428256763.573482bf2396e92f7ea555322e355f0c.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,528 INFO [StoreCloserThread-UserAnalytics360RecipientCampaign,cccccccc,1478428494703.1967dfcfef3faeebc6d9fd7d05747f5f.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,528 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed AverageCampaignPerformance,33333333,1478428256763.573482bf2396e92f7ea555322e355f0c.
2017-05-24 01:47:23,528 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed UserAnalytics360RecipientCampaign,cccccccc,1478428494703.1967dfcfef3faeebc6d9fd7d05747f5f.
2017-05-24 01:47:23,528 INFO [StoreCloserThread-DashboardChannelWisePerformance,cccccccc,1478428370167.19b277189fb674a58df32f971ddf60e1.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,529 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed DashboardChannelWisePerformance,cccccccc,1478428370167.19b277189fb674a58df32f971ddf60e1.
2017-05-24 01:47:23,529 INFO [StoreCloserThread-UserAnalytics360ViralityMap,66666666,1478428479264.8a3aa596cc4554206f322dd7767b5a57.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,529 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed UserAnalytics360ViralityMap,66666666,1478428479264.8a3aa596cc4554206f322dd7767b5a57.
2017-05-24 01:47:23,529 INFO [StoreCloserThread-ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,529 ERROR [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Memstore size is 116039712
2017-05-24 01:47:23,529 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed ORMDetails,GkMwabF20152rs_c150f3f5d4a239a5ee3593654e1a6c16D13newsNegativeen20170122224825,1493286144460.96ab0ca6ba008280cd0cc841abeeb413.
2017-05-24 01:47:23,529 INFO [StoreCloserThread-hbase:namespace,,1478354778901.6201d4326a15a970e51e0faf87830fbd.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,529 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed hbase:namespace,,1478354778901.6201d4326a15a970e51e0faf87830fbd.
2017-05-24 01:47:23,530 INFO [StoreCloserThread-CampaignMailBounce,66666666,1478427752439.33c84cd6be966a19e8a97e553182e2f6.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,530 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignMailBounce,66666666,1478427752439.33c84cd6be966a19e8a97e553182e2f6.
2017-05-24 01:47:23,530 INFO [StoreCloserThread-RecommendationToleranceValues,99999999,1478428275574.145b1836930567e057b99515086d871d.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,530 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed RecommendationToleranceValues,99999999,1478428275574.145b1836930567e057b99515086d871d.
2017-05-24 01:47:23,530 INFO [StoreCloserThread-SocialMediaAnalyticsRecipients,99999999,1478427863544.8111e9fc6c837bbac6a95ddea20101ad.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,530 ERROR [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Memstore size is 472688
2017-05-24 01:47:23,530 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed SocialMediaAnalyticsRecipients,99999999,1478427863544.8111e9fc6c837bbac6a95ddea20101ad.
2017-05-24 01:47:23,530 INFO [StoreCloserThread-CampaignConversionTracking,66666666,1478427850748.fd9b0a1c14e5d8e49156ab69e1ac470d.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,530 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignConversionTracking,66666666,1478427850748.fd9b0a1c14e5d8e49156ab69e1ac470d.
2017-05-24 01:47:23,530 INFO [StoreCloserThread-CampaignSummaryIndustrySegment,66666666,1478428096368.6289880f7376fb3585afed107e8fb3ec.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,530 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignSummaryIndustrySegment,66666666,1478428096368.6289880f7376fb3585afed107e8fb3ec.
2017-05-24 01:47:23,531 INFO [StoreCloserThread-CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.-1] regionserver.HStore: Closed Em
2017-05-24 01:47:23,531 INFO [StoreCloserThread-CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.-1] regionserver.HStore: Closed Fb
2017-05-24 01:47:23,531 INFO [StoreCloserThread-CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.-1] regionserver.HStore: Closed Fp
2017-05-24 01:47:23,531 INFO [StoreCloserThread-CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.-1] regionserver.HStore: Closed GP
2017-05-24 01:47:23,531 INFO [StoreCloserThread-Tenants,66666666,1478428272455.26760264b54bda1758a0bc3dc4c2cc8a.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,531 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed Tenants,66666666,1478428272455.26760264b54bda1758a0bc3dc4c2cc8a.
2017-05-24 01:47:23,531 INFO [StoreCloserThread-CampaignSummaryDetailGeographyByCity,66666666,1478428044795.4fbff9f2371e00c26137f17c526c653c.-1] regionserver.HStore: Closed Geo
2017-05-24 01:47:23,531 INFO [StoreCloserThread-CampaignSummaryDetailGeographyByCity,66666666,1478428044795.4fbff9f2371e00c26137f17c526c653c.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,531 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignSummaryDetailGeographyByCity,66666666,1478428044795.4fbff9f2371e00c26137f17c526c653c.
2017-05-24 01:47:23,532 INFO [StoreCloserThread-CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.-1] regionserver.HStore: Closed Mob
2017-05-24 01:47:23,532 INFO [StoreCloserThread-CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.-1] regionserver.HStore: Closed Pi
2017-05-24 01:47:23,533 INFO [StoreCloserThread-CampaignFacts,66666666,1478427883316.67ad226a0d0707995914b55e2ab16457.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,533 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignFacts,66666666,1478427883316.67ad226a0d0707995914b55e2ab16457.
2017-05-24 01:47:23,533 INFO [StoreCloserThread-CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.-1] regionserver.HStore: Closed Qr
2017-05-24 01:47:23,533 INFO [StoreCloserThread-CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.-1] regionserver.HStore: Closed Tw
2017-05-24 01:47:23,533 INFO [StoreCloserThread-CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.-1] regionserver.HStore: Closed Yt
2017-05-24 01:47:23,533 INFO [StoreCloserThread-CampaignBlastScheduleTransaction,zAcwPQf2dSCVN8,1488896783160.9f20573bd4da99c6f0735b0354da41be.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,533 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignBlastScheduleTransaction,zAcwPQf2dSCVN8,1488896783160.9f20573bd4da99c6f0735b0354da41be.
2017-05-24 01:47:23,533 INFO [StoreCloserThread-CampaignSMSTransaction,fizAcwHI7vJHFT7L201603030005,1479221891043.f856bd44e788a40bd24dbfdbb18a44ec.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,534 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignSMSTransaction,fizAcwHI7vJHFT7L201603030005,1479221891043.f856bd44e788a40bd24dbfdbb18a44ec.
2017-05-24 01:47:23,534 INFO [StoreCloserThread-CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,534 ERROR [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Memstore size is 31824
2017-05-24 01:47:23,534 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSummaryFactsMDC,99999999,1478427948111.fcf0380fd986f7c3d3f3bf298e91e504.
2017-05-24 01:47:23,534 INFO [StoreCloserThread-EmailCampaignStatus,zAcwIqvDVOPENCPUG,1494848774530.7ffad7a36bf1fd629f7ef73d70fce1fd.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,534 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed EmailCampaignStatus,zAcwIqvDVOPENCPUG,1494848774530.7ffad7a36bf1fd629f7ef73d70fce1fd.
2017-05-24 01:47:23,534 INFO [StoreCloserThread-CampaignMailUnsubscribe,cccccccc,1478427744280.3e109e0edd02acff884ff6a4fd21572c.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,534 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignMailUnsubscribe,cccccccc,1478427744280.3e109e0edd02acff884ff6a4fd21572c.
2017-05-24 01:47:23,534 INFO [StoreCloserThread-CampaignSummarySnapshot,,1478428220782.8b99b2fdfdc04881aa1840bcdf61e61c.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,535 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed CampaignSummarySnapshot,,1478428220782.8b99b2fdfdc04881aa1840bcdf61e61c.
2017-05-24 01:47:23,535 INFO [StoreCloserThread-CampaignSMSLinkClicksTransaction,,1481276951756.90aac44a072294a0f0c1083dc7aee2ca.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,535 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed Em
2017-05-24 01:47:23,535 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed Fb
2017-05-24 01:47:23,535 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed Fp
2017-05-24 01:47:23,535 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed Ga
2017-05-24 01:47:23,535 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed Lnk
2017-05-24 01:47:23,535 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignSMSLinkClicksTransaction,,1481276951756.90aac44a072294a0f0c1083dc7aee2ca.
2017-05-24 01:47:23,535 INFO [StoreCloserThread-UserAnalytics360UserReports,33333333,1478428457233.cdb2abc48692634b7bd8f08011c0b7bb.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,536 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed UserAnalytics360UserReports,33333333,1478428457233.cdb2abc48692634b7bd8f08011c0b7bb.
2017-05-24 01:47:23,536 INFO [StoreCloserThread-IndustryBenchMark,cccccccc,1478428278697.110cbdd3996f4937bfaa7e442abefc66.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,536 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed IndustryBenchMark,cccccccc,1478428278697.110cbdd3996f4937bfaa7e442abefc66.
2017-05-24 01:47:23,536 INFO [StoreCloserThread-SocialMediaInsightsDetail_New_bkp,33333333,1493112706395.73bc1e0573f1fe3a9a955ff06145213e.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,536 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed SocialMediaInsightsDetail_New_bkp,33333333,1493112706395.73bc1e0573f1fe3a9a955ff06145213e.
2017-05-24 01:47:23,536 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed Mb
2017-05-24 01:47:23,536 INFO [StoreCloserThread-UserAnalytics360ViralityMap,,1478428479264.b979f32d86456c3f54e225892c9108d0.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,536 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed UserAnalytics360ViralityMap,,1478428479264.b979f32d86456c3f54e225892c9108d0.
2017-05-24 01:47:23,537 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed Qr
2017-05-24 01:47:23,537 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed Ra
2017-05-24 01:47:23,537 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed Tw
2017-05-24 01:47:23,537 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed We
2017-05-24 01:47:23,537 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed Yt
2017-05-24 01:47:23,537 INFO [StoreCloserThread-SocialMediaInsightsDetail_New_Bkp160517,66666666,1478427825373.df5197dc9a7677bb7fdc7a7738363b2b.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,537 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed SocialMediaInsightsDetail_New_Bkp160517,66666666,1478427825373.df5197dc9a7677bb7fdc7a7738363b2b.
2017-05-24 01:47:23,538 INFO [StoreCloserThread-CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,538 INFO [StoreCloserThread-CampaignSMSLinkClicksTransaction,66666666,1478427776736.6a3fe1509ce1ce80f92ca023614ed508.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,538 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignGoalMDC,66666666,1478427913612.d203ee0385f99a4648972742c3debddb.
2017-05-24 01:47:23,538 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignSMSLinkClicksTransaction,66666666,1478427776736.6a3fe1509ce1ce80f92ca023614ed508.
2017-05-24 01:47:23,538 INFO [StoreCloserThread-SocialMediaUserBehaviour,66666666,1478427797767.8f8da615dc09c5e396aae602320a75d7.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,538 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed SocialMediaUserBehaviour,66666666,1478427797767.8f8da615dc09c5e396aae602320a75d7.
2017-05-24 01:47:23,539 INFO [StoreCloserThread-ORMDetails,99999999,1488445255139.6e08124dd36cef22a5b5d3bd1ebf324a.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,539 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed ORMDetails,99999999,1488445255139.6e08124dd36cef22a5b5d3bd1ebf324a.
2017-05-24 01:47:23,539 INFO [StoreCloserThread-CampaignBlastScheduleTransaction,srzAcwp3D5XFFASRV,1492484474153.18190d106607299d031ceadd8cf40034.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,539 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignBlastScheduleTransaction,srzAcwp3D5XFFASRV,1492484474153.18190d106607299d031ceadd8cf40034.
2017-05-24 01:47:23,540 INFO [StoreCloserThread-Campaigns,99999999,1478354871976.dc140556f6226019abb5289413e67175.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,540 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed Campaigns,99999999,1478354871976.dc140556f6226019abb5289413e67175.
2017-05-24 01:47:23,540 INFO [StoreCloserThread-CampaignSMSTransaction,FEzAcw4dATiS95HA201702251800,1490095737415.8f0a2d3fafd021b853a5f90c08b4c41a.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,540 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSMSTransaction,FEzAcw4dATiS95HA201702251800,1490095737415.8f0a2d3fafd021b853a5f90c08b4c41a.
2017-05-24 01:47:23,540 INFO [StoreCloserThread-CampaignBenchMark,cccccccc,1478428131332.4dd348bfccd5d2f1d7a29838780cc80d.-1] regionserver.HStore: Closed Em
2017-05-24 01:47:23,540 INFO [StoreCloserThread-CampaignBenchMark,cccccccc,1478428131332.4dd348bfccd5d2f1d7a29838780cc80d.-1] regionserver.HStore: Closed Fb
2017-05-24 01:47:23,540 INFO [StoreCloserThread-CampaignBenchMark,cccccccc,1478428131332.4dd348bfccd5d2f1d7a29838780cc80d.-1] regionserver.HStore: Closed Fp
2017-05-24 01:47:23,540 INFO [StoreCloserThread-CampaignBenchMark,cccccccc,1478428131332.4dd348bfccd5d2f1d7a29838780cc80d.-1] regionserver.HStore: Closed Lnk
2017-05-24 01:47:23,540 INFO [StoreCloserThread-CampaignBenchMark,cccccccc,1478428131332.4dd348bfccd5d2f1d7a29838780cc80d.-1] regionserver.HStore: Closed Mob
2017-05-24 01:47:23,540 INFO [StoreCloserThread-CampaignBenchMark,cccccccc,1478428131332.4dd348bfccd5d2f1d7a29838780cc80d.-1] regionserver.HStore: Closed QR
2017-05-24 01:47:23,540 INFO [StoreCloserThread-CampaignBenchMark,cccccccc,1478428131332.4dd348bfccd5d2f1d7a29838780cc80d.-1] regionserver.HStore: Closed Tw
2017-05-24 01:47:23,540 INFO [StoreCloserThread-CampaignBenchMark,cccccccc,1478428131332.4dd348bfccd5d2f1d7a29838780cc80d.-1] regionserver.HStore: Closed Yt
2017-05-24 01:47:23,540 INFO [StoreCloserThread-CampaignBenchMark,cccccccc,1478428131332.4dd348bfccd5d2f1d7a29838780cc80d.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,540 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignBenchMark,cccccccc,1478428131332.4dd348bfccd5d2f1d7a29838780cc80d.
2017-05-24 01:47:23,540 INFO [StoreCloserThread-Tenants,33333333,1478428272455.29188a1d676cd5a2afa105408c09738c.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,540 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed Tenants,33333333,1478428272455.29188a1d676cd5a2afa105408c09738c.
2017-05-24 01:47:23,541 INFO [StoreCloserThread-CampaignSMSTransaction,FEzAcwRSK6tNP72X201702031643,1490792676052.32369ddd4ed995d60876449918cdb6dd.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,541 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSMSTransaction,FEzAcwRSK6tNP72X201702031643,1490792676052.32369ddd4ed995d60876449918cdb6dd.
2017-05-24 01:47:23,541 INFO [StoreCloserThread-CampaignMailBounce,zAcwItvKPQEGUS,1489815449837.fda0883cc7625ec72dc054ab51443878.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,541 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignMailBounce,zAcwItvKPQEGUS,1489815449837.fda0883cc7625ec72dc054ab51443878.
2017-05-24 01:47:23,542 INFO [StoreCloserThread-CampaignConversionTracking,33333333,1478427850748.3062e97bcbc5822354adf849ba057764.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,542 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignConversionTracking,33333333,1478427850748.3062e97bcbc5822354adf849ba057764.
2017-05-24 01:47:23,542 INFO [StoreCloserThread-CampaignMailSpam,99999999,1478427760210.55f297f34f5af3c515867a18a4bf7904.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,542 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignMailSpam,99999999,1478427760210.55f297f34f5af3c515867a18a4bf7904.
2017-05-24 01:47:23,542 INFO [StoreCloserThread-UserAnalytics360Details,99999999,1478428500855.0ca3c303e54bc6abb52eed0ef8f5c4d5.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,542 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed UserAnalytics360Details,99999999,1478428500855.0ca3c303e54bc6abb52eed0ef8f5c4d5.
2017-05-24 01:47:23,543 INFO [StoreCloserThread-CampaignSummaryImpact,cccccccc,1478427995538.cbe227556c858a5507939b264173c203.-1] regionserver.HStore: Closed Dg
2017-05-24 01:47:23,543 INFO [StoreCloserThread-CampaignSummaryImpact,cccccccc,1478427995538.cbe227556c858a5507939b264173c203.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,543 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignSummaryImpact,cccccccc,1478427995538.cbe227556c858a5507939b264173c203.
2017-05-24 01:47:23,543 INFO [StoreCloserThread-CampaignMailBounce,zAcwSTE3LAD6420150825071313,1489815449837.07f8e516bcef00b0068fd4d53b4d0919.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,543 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignMailBounce,zAcwSTE3LAD6420150825071313,1489815449837.07f8e516bcef00b0068fd4d53b4d0919.
2017-05-24 01:47:23,543 INFO [StoreCloserThread-BusinessUnits,,1478428259919.d642351b06e5e4479917ae536a49759f.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,543 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed BusinessUnits,,1478428259919.d642351b06e5e4479917ae536a49759f.
2017-05-24 01:47:23,544 INFO [StoreCloserThread-AverageCampaignPerformance,cccccccc,1478428256763.0e45b3d62a15c443e71563ba6e8e5633.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,545 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed AverageCampaignPerformance,cccccccc,1478428256763.0e45b3d62a15c443e71563ba6e8e5633.
2017-05-24 01:47:23,545 INFO [StoreCloserThread-ModelsConfigSync,cccccccc,1478428291186.fec4c70f1e27c5bde24a02183abaf346.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,545 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed ModelsConfigSync,cccccccc,1478428291186.fec4c70f1e27c5bde24a02183abaf346.
2017-05-24 01:47:23,546 INFO [StoreCloserThread-CampaignSummaryTrends_New,99999999,1478427970677.fea5fdce86225406c7dd0dd4c7e51a38.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,546 ERROR [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Memstore size is 35364176
2017-05-24 01:47:23,546 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignSummaryTrends_New,99999999,1478427970677.fea5fdce86225406c7dd0dd4c7e51a38.
2017-05-24 01:47:23,546 INFO [StoreCloserThread-CampaignBlastScheduleTransaction,MHzAcwtXow7ZT73N,1494285503142.d5c4ae495396cdd5b0d3803646814839.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,546 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignBlastScheduleTransaction,MHzAcwtXow7ZT73N,1494285503142.d5c4ae495396cdd5b0d3803646814839.
2017-05-24 01:47:23,547 INFO [StoreCloserThread-CampaignSMSTransaction,FEzAcwwFF53494P8,1488650782804.6a20a7af0fab95f556bae635ec6be6b3.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,547 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSMSTransaction,FEzAcwwFF53494P8,1488650782804.6a20a7af0fab95f556bae635ec6be6b3.
2017-05-24 01:47:23,547 INFO [StoreCloserThread-CampaignQRCodeTransaction,66666666,1478427783205.9b034aa229d65dbc0de1077fcbd53dc5.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,547 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignQRCodeTransaction,66666666,1478427783205.9b034aa229d65dbc0de1077fcbd53dc5.
2017-05-24 01:47:23,547 INFO [StoreCloserThread-CampaignSMSLinkClicksTransaction,lozAcwnoUBP3W6QdD20160416055345,1484152698433.bb9bcccf62b3e152db1aa34f52fbc7ed.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,547 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSMSLinkClicksTransaction,lozAcwnoUBP3W6QdD20160416055345,1484152698433.bb9bcccf62b3e152db1aa34f52fbc7ed.
2017-05-24 01:47:23,549 INFO [StoreCloserThread-WebAnalyticsUserFlow,99999999,1495429931220.08ee69f577c9f5e759086b00d4ec854b.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,549 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed WebAnalyticsUserFlow,99999999,1495429931220.08ee69f577c9f5e759086b00d4ec854b.
2017-05-24 01:47:23,549 INFO [StoreCloserThread-WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,549 ERROR [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Memstore size is 168182736
2017-05-24 01:47:23,549 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed WebAnalyticsUserFlow,cDGw45zRACountryin11494415496,1495429931220.64c90a0262dfa861044618e31604c3e3.
2017-05-24 01:47:23,550 INFO [StoreCloserThread-CampaignSMSTransaction,00zAcwklUHaVSFYC6201508270717,1479280046343.061c7096dc96a669709456d9e1496c5e.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,550 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-2] regionserver.HRegion: Closed CampaignSMSTransaction,00zAcwklUHaVSFYC6201508270717,1479280046343.061c7096dc96a669709456d9e1496c5e.
2017-05-24 01:47:23,550 INFO [StoreCloserThread-CampaignBlastScheduleTransactionCount,,1478428104476.2dbe890dc78d7e2041c9878543f40e70.-1] regionserver.HStore: Closed info
2017-05-24 01:47:23,550 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-0] regionserver.HRegion: Closed CampaignBlastScheduleTransactionCount,,1478428104476.2dbe890dc78d7e2041c9878543f40e70.
2017-05-24 01:47:23,863 INFO [regionserver/aps-hadoop6/hadoop6:16020.leaseChecker] regionserver.Leases: regionserver/aps-hadoop6/hadoop6:16020.leaseChecker closing leases
2017-05-24 01:47:23,863 INFO [regionserver/aps-hadoop6/hadoop6:16020.leaseChecker] regionserver.Leases: regionserver/aps-hadoop6/hadoop6:16020.leaseChecker closed leases
2017-05-24 01:47:24,638 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.HRegionServer: Waiting on 1 regions to close
2017-05-24 01:47:24,983 INFO [StoreCloserThread-SMSCampaignStatus,zAcwXPK7tBUTV2,1494704681403.13f0709e1757c082c1d370b61d4c2264.-1] regionserver.HStore: Closed info
2017-05-24 01:47:24,983 INFO [RS_CLOSE_REGION-aps-hadoop6:16020-1] regionserver.HRegion: Closed SMSCampaignStatus,zAcwXPK7tBUTV2,1494704681403.13f0709e1757c082c1d370b61d4c2264.
2017-05-24 01:47:25,039 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.HRegionServer: stopping server aps-hadoop6,16020,1495538759899; all regions closed.
2017-05-24 01:47:30,808 INFO [RS_OPEN_META-aps-hadoop6:16020-0-MetaLogRoller] regionserver.LogRoller: LogRoller exiting.
2017-05-24 01:47:31,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer$PeriodicMemstoreFlusher: Chore: aps-hadoop6,16020,1495538759899-MemstoreFlusherChore was stopped
2017-05-24 01:48:01,827 INFO [aps-hadoop6,16020,1495538759899_ChoreService_1] regionserver.HRegionServer$MovedRegionsCleaner: Chore: MovedRegionsCleaner for region aps-hadoop6,16020,1495538759899 was stopped
2017-05-24 01:48:07,377 WARN [ResponseProcessor for block BP-1810172115-hadoop2-1478343078462:blk_1080771232_7055736] hdfs.DFSClient: Slow ReadProcessor read fields took 42334ms (threshold=30000ms); ack: seqno: 2 reply: SUCCESS reply: SUCCESS reply: SUCCESS downstreamAckTimeNanos: 42333082807 flag: 0 flag: 0 flag: 0, targets: [DatanodeInfoWithStorage[hadoop6:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-50fbd188-eca8-404b-9a7a-a85a07a1a66b,DISK], DatanodeInfoWithStorage[hadoop4:50010,DS-9f6829c5-3834-436e-8ec2-39df06418ca4,DISK]]
2017-05-24 01:48:07,377 WARN [regionserver/aps-hadoop6/hadoop6:16020] hdfs.DFSClient: Slow waitForAckedSeqno took 42336ms (threshold=30000ms)
2017-05-24 01:48:07,407 WARN [regionserver/aps-hadoop6/hadoop6:16020] wal.ProtobufLogWriter: Failed to write trailer, non-fatal, continuing...
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:947)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1021)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1189)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:48:07,408 ERROR [regionserver/aps-hadoop6/hadoop6:16020] regionserver.HRegionServer: Shutdown / close of WAL failed: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]], original=[DatanodeInfoWithStorage[hadoop7:50010,DS-9a10c707-ebd5-4fae-8f82-b381f706fa57,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
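The WAL trailer-write and close failures cite the HDFS client's failed-datanode replacement policy by name. As a sketch only (keys are the real `dfs.client.block.write.replace-datanode-on-failure.*` client settings from `hdfs-default.xml`; the values shown are illustrative, not taken from this log), the relevant client-side `hdfs-site.xml` section looks like this:

```xml
<!-- hdfs-site.xml, client side. Illustrative sketch; values are examples, not from this log. -->
<property>
  <!-- DEFAULT | ALWAYS | NEVER: when to try replacing a failed datanode in a write pipeline -->
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
</property>
<property>
  <!-- With best-effort=true the client keeps writing on a shrunken pipeline when no
       replacement datanode can be found, instead of failing with an IOException -->
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>
```

On a cluster with few datanodes (the pipelines above only ever list hadoop1 through hadoop7), the DEFAULT policy with replication factor 3 leaves little room to find a replacement once a couple of nodes are slow or down, which is consistent with the "no more good datanodes being available to try" message; loosening the policy trades pipeline durability for availability.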
2017-05-24 01:48:07,419 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.Leases: regionserver/aps-hadoop6/hadoop6:16020 closing leases
2017-05-24 01:48:07,419 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.Leases: regionserver/aps-hadoop6/hadoop6:16020 closed leases
2017-05-24 01:48:07,419 INFO [regionserver/aps-hadoop6/hadoop6:16020] hbase.ChoreService: Chore service for: aps-hadoop6,16020,1495538759899 had [] on shutdown
2017-05-24 01:48:07,430 INFO [regionserver/aps-hadoop6/hadoop6:16020] ipc.RpcServer: Stopping server on 16020
2017-05-24 01:48:07,430 INFO [RpcServer.listener,port=16020] ipc.RpcServer: RpcServer.listener,port=16020: stopping
2017-05-24 01:48:07,431 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2017-05-24 01:48:07,431 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2017-05-24 01:48:07,451 INFO [regionserver/aps-hadoop6/hadoop6:16020] zookeeper.ZooKeeper: Session: 0x15887d92a89b8d6 closed
2017-05-24 01:48:07,451 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.HRegionServer: stopping server aps-hadoop6,16020,1495538759899; zookeeper connection closed.
2017-05-24 01:48:07,451 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2017-05-24 01:48:07,451 INFO [regionserver/aps-hadoop6/hadoop6:16020] regionserver.HRegionServer: regionserver/aps-hadoop6/hadoop6:16020 exiting
2017-05-24 01:48:07,451 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:68)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2655)
2017-05-24 01:48:07,454 INFO [pool-2-thread-1] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@7a11c4c7
2017-05-24 01:48:07,454 INFO [pool-2-thread-1] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2017-05-24 01:48:17,454 WARN [pool-2-thread-1] util.Threads: Thread-3581; joinwait=30000
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1253)
    at org.apache.hadoop.hbase.util.Threads.shutdown(Threads.java:111)
    at org.apache.hadoop.hbase.regionserver.ShutdownHook$ShutdownHookThread.run(ShutdownHook.java:124)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
2017-05-24 01:48:17,455 WARN [Thread-8] util.ShutdownHookManager: ShutdownHook 'ShutdownHookThread' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask.get(FutureTask.java:205)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:67)
2017-05-24 01:48:19,427 WARN [ResponseProcessor for block BP-1810172115-hadoop2-1478343078462:blk_1080771808_7056352] hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-1810172115-hadoop2-1478343078462:blk_1080771808_7056352
java.io.IOException: Bad response ERROR for block BP-1810172115-hadoop2-1478343078462:blk_1080771808_7056352 from datanode DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:785)
2017-05-24 01:48:19,427 WARN [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590437010 block BP-1810172115-hadoop2-1478343078462:blk_1080771808_7056352] hdfs.DFSClient: Error Recovery for block BP-1810172115-hadoop2-1478343078462:blk_1080771808_7056352 in pipeline DatanodeInfoWithStorage[hadoop6:50010,DS-08f50d3d-2a14-4033-b117-b162b0cae2ce,DISK], DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK], DatanodeInfoWithStorage[hadoop1:50010,DS-5f5eea04-33fc-449d-8bb6-d372303a99c7,DISK]: bad datanode DatanodeInfoWithStorage[hadoop5:50010,DS-6a874575-5e2a-4b1d-8914-c70e465dba0e,DISK]
2017-05-24 01:48:19,430 WARN [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590437010 block BP-1810172115-hadoop2-1478343078462:blk_1080771808_7056352] retry.RetryInvocationHandler: Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline over aps-hadoop2/hadoop2:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-1810172115-hadoop2-1478343078462:blk_1080771808_7056352 does not exist or is not under Constructionblk_1080771808_7056352{UCState=UNDER_RECOVERY, truncateBlock=null, primaryNodeIndex=0, replicas=[ReplicaUC[[DISK]DS-6a874575-5e2a-4b1d-8914-c70e465dba0e:NORMAL:hadoop5:50010|RBW], ReplicaUC[[DISK]DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152:NORMAL:hadoop6:50010|RBW], ReplicaUC[[DISK]DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb:NORMAL:hadoop1:50010|RBW]]}
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6457)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6525)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:911)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:963)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267)
    at org.apache.hadoop.ipc.Client.call(Client.java:1455)
    at org.apache.hadoop.ipc.Client.call(Client.java:1392)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy16.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:903)
    at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy17.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy18.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1202)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:48:19,431 WARN [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop6,16020,1495538759899/aps-hadoop6%2C16020%2C1495538759899.default.1495590437010 block BP-1810172115-hadoop2-1478343078462:blk_1080771808_7056352] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-1810172115-hadoop2-1478343078462:blk_1080771808_7056352 does not exist or is not under Constructionblk_1080771808_7056352{UCState=UNDER_RECOVERY, truncateBlock=null, primaryNodeIndex=0, replicas=[ReplicaUC[[DISK]DS-6a874575-5e2a-4b1d-8914-c70e465dba0e:NORMAL:hadoop5:50010|RBW], ReplicaUC[[DISK]DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152:NORMAL:hadoop6:50010|RBW], ReplicaUC[[DISK]DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb:NORMAL:hadoop1:50010|RBW]]}
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6457)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6525)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:911)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:963)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267)
    at org.apache.hadoop.ipc.Client.call(Client.java:1455)
    at org.apache.hadoop.ipc.Client.call(Client.java:1392)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy16.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:903)
    at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy17.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy18.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1202)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:48:19,431 ERROR [Thread-3581] hdfs.DFSClient: Failed to close inode 9475890
org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-1810172115-hadoop2-1478343078462:blk_1080771808_7056352 does not exist or is not under Constructionblk_1080771808_7056352{UCState=UNDER_RECOVERY, truncateBlock=null, primaryNodeIndex=0, replicas=[ReplicaUC[[DISK]DS-6a874575-5e2a-4b1d-8914-c70e465dba0e:NORMAL:hadoop5:50010|RBW], ReplicaUC[[DISK]DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152:NORMAL:hadoop6:50010|RBW], ReplicaUC[[DISK]DS-3109a88b-277e-4d45-8cfb-8a5f3bf57adb:NORMAL:hadoop1:50010|RBW]]}
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6457)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6525)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:911)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:963)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267)
    at org.apache.hadoop.ipc.Client.call(Client.java:1455)
    at org.apache.hadoop.ipc.Client.call(Client.java:1392)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy16.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:903)
    at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy17.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy18.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1202)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-24 01:48:19,432 INFO [pool-2-thread-1] regionserver.ShutdownHook: Shutdown hook finished.