Member since
07-30-2019
3391
Posts
1618
Kudos Received
1000
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 251 | 11-05-2025 11:01 AM |
| | 157 | 11-05-2025 08:01 AM |
| | 491 | 10-20-2025 06:29 AM |
| | 631 | 10-10-2025 08:03 AM |
| | 402 | 10-08-2025 10:52 AM |
06-01-2017
11:04 AM
1 Kudo
@Simran Kaur I see from your screenshot that your PutHDFS processor is producing bulletins (the red square in the upper right corner). If you hover your cursor over that red square, the bulletin text will be displayed. You can also look for the same error in nifi-app.log, where in many cases the error line is followed by a full stack trace. That error line and stack trace may explain what your issue is here. If this does not help, please share your PutHDFS processor configuration. Thanks, Matt
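If the bulletin disappears before you can read it, grepping the log works too. A minimal sketch; the default log path is an assumption, so point it at your actual install's logs directory:

```shell
# Sketch: pull PutHDFS ERROR lines out of nifi-app.log, plus the following
# lines that usually contain the stack trace. The default path below is an
# assumption about a typical install location.
show_puthdfs_errors() {
  local log="${1:-/opt/nifi/logs/nifi-app.log}"
  grep -A 20 "ERROR.*PutHDFS" "$log"
}
```

Usage: `show_puthdfs_errors /path/to/nifi-app.log`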
05-31-2017
09:36 PM
No problem... as soon as you added that you were running HDF 2.1.2, it helped.
05-31-2017
08:46 PM
@Oliver Meyn You have run into the following bug: https://issues.apache.org/jira/browse/NIFI-3664 The good news is that the fix for this bug is part of HDF 2.1.3. Thank you, Matt
05-31-2017
04:49 PM
2 Kudos
@Oliver Meyn In a NiFi cluster, the time displayed in the UI can come from any one of the connected nodes, so it is important to run NTP on every node in your NiFi cluster to keep clocks in sync. As far as timezone differences go, make sure the /etc/localtime symlink points at the same /usr/share/zoneinfo/... file on every one of your NiFi nodes. Run "date --utc" on all your nodes and compare the output against both of the following commands: zdump /usr/share/zoneinfo/EST
zdump /usr/share/zoneinfo/US/Eastern If you are looking for EDT time, you need to make sure that /etc/localtime points to the following on all your nodes: lrwxrwxrwx. 1 root root 25 Dec 1 2014 localtime -> ../usr/share/zoneinfo/US/Eastern Thanks, Matt
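The checks above can be run as one short script on each node and the outputs compared; a minimal sketch, assuming the standard RHEL/CentOS zoneinfo paths:

```shell
# Print the UTC clock, the zone file /etc/localtime resolves to, and the
# current US/Eastern offset. The output should match across every NiFi node.
date --utc
if [ -e /etc/localtime ]; then
  readlink -f /etc/localtime
fi
if command -v zdump >/dev/null; then
  zdump /usr/share/zoneinfo/US/Eastern
fi
```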
05-31-2017
01:51 PM
3 Kudos
@Simran Kaur All ports 1024 and below are considered reserved as privileged ports and can only be bound by processes running as the root user. NiFi can use these ports if it is running as root. The alternative is to set up your ListenHTTP processor on a non-privileged port and then add a port-forwarding rule in iptables that redirects incoming requests on the privileged port to the non-privileged port you are using in NiFi: iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8081 Thanks,
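The redirect can be paired with a quick check that the NAT rule took effect. A sketch of the firewall configuration (requires root; port 8081 is an example, substitute your ListenHTTP port):

```shell
# Forward privileged port 80 to NiFi's non-privileged listener on 8081,
# then list the PREROUTING chain to confirm the rule is present.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8081
iptables -t nat -L PREROUTING -n --line-numbers
```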
05-31-2017
01:44 PM
2 Kudos
@Richard Corfield You can get GC output logs out of NiFi by adding the following lines to NiFi's bootstrap.conf file: java.arg.20=-XX:+PrintGCDetails
java.arg.21=-XX:+PrintGCTimeStamps
java.arg.22=-XX:+PrintGCDateStamps
java.arg.23=-Xloggc:<file>
The last entry lets you specify a separate log file for this output to be written to, rather than stdout. Thanks, Matt
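Once NiFi restarts with these arguments, full collections can be counted straight out of the log. A minimal sketch; the log path argument is whatever you passed to -Xloggc:

```shell
# Count lines that record a full GC event in the log produced by -Xloggc.
count_full_gcs() {
  grep -c "Full GC" "$1"
}
```

Usage: `count_full_gcs /var/log/nifi/gc.log` (path is an example).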
05-31-2017
12:58 PM
@Joshua Adeleke Are you able to add new controller services?
05-30-2017
05:21 PM
1 Kudo
Your FGC count is not very high at only 4. Do you still have issues if you restart your cluster with all components in a stopped state? That would at least show whether your issue is dataflow related. To start with all components stopped, edit the nifi.properties file on all nodes and change the following property to "false": nifi.flowcontroller.autoResumeState=true Once your cluster is back up and you can access the UI, check for anywhere in your stopped flow where you have queued data (look at the largest queues first). Start those flows first and watch the impact they have on GC and heap memory usage. Another possibility is that you are oversubscribing your available resources. Use the "top" command to observe the impact on your system's resources (namely CPU). Make sure you have not allocated too many concurrent tasks to your individual processors, and that your "max timer driven thread count" and "max event driven thread count" values are not set too high. These values should be set to 2-4 times the number of cores in a single node. For example, if you have a 4 node cluster and each node has 16 cores, the max values should be between 32 and 64 (2-4 times 16 cores). Thanks, Matt
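The sizing rule above is easy to compute per node; a small sketch using nproc (GNU coreutils) to read the core count of the node it runs on:

```shell
# Suggested range for the max timer/event driven thread counts:
# 2-4x the cores of a single node (not the cluster total).
cores=$(nproc)
echo "cores per node: $cores"
echo "suggested max thread count: $((cores * 2)) to $((cores * 4))"
```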
05-30-2017
03:34 PM
@Xi Sanderson The above log line is not an indication of a problem. You can use the jstat command to look at your GC stats. For example, the following command prints GC stats every 250 ms, for 1000 samples: /<path to Java>/bin/jstat -gcutil <nifi pid> 250 1000
The columns are:
S0: Survivor space 0 utilization as a percentage of the space's current capacity.
S1: Survivor space 1 utilization as a percentage of the space's current capacity.
E: Eden space utilization as a percentage of the space's current capacity.
O: Old space utilization as a percentage of the space's current capacity.
P: Permanent space utilization as a percentage of the space's current capacity.
YGC: Number of young generation GC events.
YGCT: Young generation garbage collection time.
FGC: Number of full GC events.
FGCT: Full garbage collection time.
GCT: Total garbage collection time.
What version of NiFi and what version of Java are you running? Thanks, Matt
05-30-2017
02:57 PM
@Xi Sanderson The most common reason for heartbeats not being generated on the configured interval is a lot of Java full garbage collection (GC) going on. Full garbage collection is a stop-the-world event: the JVM will do nothing else until the collection has completed. Partial garbage collection is normal and healthy, and its stop-the-world pauses should be extremely short in duration. You can look at the GC stats for your NiFi cluster via the system diagnostics UI, reachable through the link in the lower right corner of the NiFi summary UI. Young generation events are normal to see, but you hopefully want to see 0 Old Gen events. If GC is your issue, you will want to figure out what has changed: new data types or volumes, new dataflows, etc... Thanks, Matt