05-03-2019
09:28 PM
Digging deeper into the logs...

Heap
 par new generation   total 153024K, used 28803K [0x00000006f3400000, 0x00000006fda00000, 0x000000071cd90000)
  eden space 136064K, 13% used [0x00000006f3400000, 0x00000006f4603360, 0x00000006fb8e0000)
  from space 16960K, 61% used [0x00000006fb8e0000, 0x00000006fc2fda00, 0x00000006fc970000)
  to   space 16960K, 0% used [0x00000006fc970000, 0x00000006fc970000, 0x00000006fda00000)
 concurrent mark-sweep generation total 339968K, used 8088K [0x000000071cd90000, 0x0000000731990000, 0x00000007c0000000)
 Metaspace       used 48711K, capacity 49062K, committed 49548K, reserved 1093632K
  class space    used 5568K, capacity 5682K, committed 5800K, reserved 1048576K

==> /var/log/hadoop-yarn/embedded-yarn-ats-hbase/hbase-yarn-ats-master-ip-11-0-1-167.log <==
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1457615217-11.0.1.167-1556905515643:blk_1073743532_2710 file=/atsv2/hbase/data/MasterProcWALs/pv2-00000000000000000002.log
	at org.apache.hadoop.hdfs.DFSInputStream.refetchLocations(DFSInputStream.java:870)
	at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:853)
	at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:832)
	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:564)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:754)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:820)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:678)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:253)
	at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:275)
	at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:280)
	at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
	at org.apache.hbase.thirdparty.com.google.protobuf.GeneratedMessageV3.parseDelimitedWithIOException(GeneratedMessageV3.java:347)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ProcedureProtos$ProcedureWALHeader.parseDelimitedFrom(ProcedureProtos.java:4707)
	at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.readHeader(ProcedureWALFormat.java:156)
	at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFile.open(ProcedureWALFile.java:84)
	at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLog(WALProcedureStore.java:1374)
	... 8 more
2019-05-03 19:19:04,948 INFO [Thread-16] regionserver.HRegionServer: ***** STOPPING region server 'ip-11-0-1-167.us-east-2.compute.internal,17000,1556911123475' *****
2019-05-03 19:19:04,948 INFO [Thread-16] regionserver.HRegionServer: STOPPED: Stopped by Thread-16
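The `BlockMissingException` suggests the HDFS block backing the procedure WAL of the embedded ATS HBase master is gone, which is why the master aborts on startup. A sketch of how one might confirm and clear this (the script only echoes the commands rather than running them, since they need a live cluster; the path comes from the log above, and removing the WAL is an assumption that the master can then rebuild its procedure state):

```shell
#!/bin/sh
# Path taken from the BlockMissingException above.
WAL=/atsv2/hbase/data/MasterProcWALs/pv2-00000000000000000002.log

# 1. Check whether any replica of the file's block still exists.
echo "hdfs fsck $WAL -files -blocks -locations"

# 2. List all corrupt files under the ATS HBase root.
echo "hdfs fsck /atsv2 -list-corruptfileblocks"

# 3. If the WAL is unrecoverable, deleting it lets the embedded HBase
#    master start without it (assumption: run as the yarn-ats service
#    user, and only after fsck confirms the block cannot be recovered).
echo "hdfs dfs -rm $WAL"
```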
05-03-2019
07:21 PM
I'm experiencing the same issue. It appears on a fresh cluster after I add a separate config group for the datanodes. The datanode accidentally got configured with the same memory limits as the master, which may have caused an out-of-memory error. Afterwards, I am unable to get TimelineService v2.0 (which runs on the master) to start. I've tried reinstalling the cluster, but the same issue eventually reappears. I don't have a solution yet.
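One way to check the out-of-memory theory would be to search the datanode and embedded-HBase logs for `OutOfMemoryError` (a sketch; the log locations assume an HDP-style layout and may differ on your cluster, so the script only echoes the grep commands it would run):

```shell
#!/bin/sh
# Assumed HDP-style log directories; adjust for your install.
DN_LOGS=/var/log/hadoop/hdfs
ATS_LOGS=/var/log/hadoop-yarn/embedded-yarn-ats-hbase

# List datanode logs that recorded an OOM.
echo "grep -l 'java.lang.OutOfMemoryError' $DN_LOGS/hadoop-hdfs-datanode-*.log"

# Show OOMs in the embedded ATS HBase logs on the master.
echo "grep 'java.lang.OutOfMemoryError' $ATS_LOGS/*.log"
```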