Member since: 05-02-2019
Posts: 4
Kudos Received: 1
Solutions: 0
08-06-2019
04:30 PM
I have NiFi administered by HDF 3.4.1.1, running on a single node. The nifi-app.log has stopped updating.
ls /var/log/nifi2
nifi-app_2019-07-07_10.0.log nifi-app_2019-07-07_15.0.log nifi-app_2019-07-07_20.0.log nifi-setup.log nifi-user_2019-06-20.log nifi-user_2019-06-28.log
nifi-app_2019-07-07_11.0.log nifi-app_2019-07-07_16.0.log nifi-app_2019-07-07_21.0.log nifi-user_2019-06-07.log nifi-user_2019-06-24.log nifi-user_2019-07-03.log
nifi-app_2019-07-07_12.0.log nifi-app_2019-07-07_17.0.log nifi-app.log nifi-user_2019-06-10.log nifi-user_2019-06-25.log nifi-user_2019-07-04.log
nifi-app_2019-07-07_13.0.log nifi-app_2019-07-07_18.0.log nifi-bootstrap_2019-06-27.log nifi-user_2019-06-11.log nifi-user_2019-06-26.log nifi-user_2019-07-05.log
nifi-app_2019-07-07_14.0.log nifi-app_2019-07-07_19.0.log nifi-bootstrap.log nifi-user_2019-06-19.log nifi-user_2019-06-27.log nifi-user.log
NiFi flows have been actively running since 07-07. How can I get logs flowing again and debug this issue? The last entry in nifi-app.log is:
2019-07-07 22:59:59,544 WARN [Timer-Driven Process Thread-14] o.a.n.controller.tasks.ConnectableTask Administratively Yielding ListS3
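For reference, a few generic shell checks that can help narrow down a stalled nifi-app.log. The logback.xml path and the APP_FILE appender name below are assumptions based on a default HDF layout; adjust them for your install:

# Check that the partition holding the log directory is not out of space or inodes
df -h /var/log/nifi2
df -i /var/log/nifi2

# Confirm the running NiFi JVM still holds nifi-app.log open (a stale handle shows up as "deleted")
lsof -p $(pgrep -f org.apache.nifi.NiFi | head -1) | grep nifi-app

# Review the rolling-appender settings that control nifi-app.log (path assumed for an HDF install)
grep -A 5 'APP_FILE' /usr/hdf/current/nifi/conf/logback.xml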
Labels: Apache NiFi
05-03-2019
09:28 PM
Digging deeper into the logs...

Heap
 par new generation   total 153024K, used 28803K [0x00000006f3400000, 0x00000006fda00000, 0x000000071cd90000)
  eden space 136064K,  13% used [0x00000006f3400000, 0x00000006f4603360, 0x00000006fb8e0000)
  from space  16960K,  61% used [0x00000006fb8e0000, 0x00000006fc2fda00, 0x00000006fc970000)
  to   space  16960K,   0% used [0x00000006fc970000, 0x00000006fc970000, 0x00000006fda00000)
 concurrent mark-sweep generation total 339968K, used 8088K [0x000000071cd90000, 0x0000000731990000, 0x00000007c0000000)
 Metaspace       used 48711K, capacity 49062K, committed 49548K, reserved 1093632K
  class space    used 5568K, capacity 5682K, committed 5800K, reserved 1048576K

==> /var/log/hadoop-yarn/embedded-yarn-ats-hbase/hbase-yarn-ats-master-ip-11-0-1-167.log <==
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1457615217-11.0.1.167-1556905515643:blk_1073743532_2710 file=/atsv2/hbase/data/MasterProcWALs/pv2-00000000000000000002.log
    at org.apache.hadoop.hdfs.DFSInputStream.refetchLocations(DFSInputStream.java:870)
    at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:853)
    at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:832)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:564)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:754)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:820)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:678)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:253)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:275)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:280)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
    at org.apache.hbase.thirdparty.com.google.protobuf.GeneratedMessageV3.parseDelimitedWithIOException(GeneratedMessageV3.java:347)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ProcedureProtos$ProcedureWALHeader.parseDelimitedFrom(ProcedureProtos.java:4707)
    at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.readHeader(ProcedureWALFormat.java:156)
    at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFile.open(ProcedureWALFile.java:84)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLog(WALProcedureStore.java:1374)
    ... 8 more
2019-05-03 19:19:04,948 INFO [Thread-16] regionserver.HRegionServer: ***** STOPPING region server 'ip-11-0-1-167.us-east-2.compute.internal,17000,1556911123475' *****
2019-05-03 19:19:04,948 INFO [Thread-16] regionserver.HRegionServer: STOPPED: Stopped by Thread-16
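For what it's worth, the BlockMissingException above names a concrete HDFS path, so a quick sanity check is to run fsck against that directory and see whether the block is genuinely missing (a minimal sketch, assuming you can run the commands as the hdfs superuser):

# Report file/block health for the ATS HBase procedure WALs referenced in the stack trace
hdfs fsck /atsv2/hbase/data/MasterProcWALs -files -blocks -locations

# Cross-check cluster-wide missing/corrupt block counts
hdfs dfsadmin -report | head -40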
05-03-2019
07:21 PM
I'm experiencing the same issue. It appears on a fresh cluster after I add a separate config group for the datanodes. The datanode accidentally got configured with the same memory limits as the master, which may have caused an out-of-memory error. Since then I have been unable to get Timeline Service v2.0 (which runs on the master) to start. I've tried reinstalling the cluster, but the same issue eventually reappears. I don't have a solution yet.