Member since: 07-30-2019
Posts: 3427
Kudos Received: 1632
Solutions: 1011
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 69 | 01-27-2026 12:46 PM |
|  | 482 | 01-13-2026 11:14 AM |
|  | 983 | 01-09-2026 06:58 AM |
|  | 898 | 12-17-2025 05:55 AM |
|  | 959 | 12-15-2025 01:29 PM |
06-01-2017
02:40 PM
@Alvaro Dominguez Multiple nodes can write to the same path in HDFS, but not to the same file at the same time. The lease error you saw above is most likely the result of one node finishing its write of .example_1202.zip and then renaming it to example_1202.zip. In that window, a different node saw .example_1202.zip and tried to start appending to it, but the file was moved/renamed before that could happen. It essentially becomes a race condition, since nodes do not share this kind of information with one another. You can still write 4 zip files to HDFS every minute; just name each file uniquely based on the hostname of the NiFi node writing it.
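One way to do that is to add an UpdateAttribute processor just ahead of the processor that writes to HDFS and rebuild the filename there (a rough sketch using NiFi Expression Language; the .zip handling assumes your files are named like example_1202.zip):

```
filename = ${filename:substringBeforeLast('.zip')}_${hostname()}.zip
```

With each node's hostname embedded in the name, two nodes can never race on the same temp file.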
Thanks,
Matt
06-01-2017
01:53 PM
1 Kudo
@Alvaro Dominguez Every node in a NiFi cluster runs its own copy of the cluster's flow, has its own repositories, and works on its own set of FlowFiles. Nodes are unaware of any FlowFiles being processed by other nodes in the cluster. What you are seeing is the normal, expected behavior of your dataflow. Thanks, Matt
06-01-2017
01:13 PM
@Oleksandr Solomko A few questions:
- Are there any other ERROR or WARN log messages?
- Is this a standalone NiFi installation or a multi-node NiFi cluster? If it is a cluster, are all of these FlowFiles queued on just one node?
- Is this NiFi secured (HTTPS) or unsecured (HTTP)?
I can't reproduce this locally. Thanks,
Matt
06-01-2017
11:45 AM
@Oleksandr Solomko There must be something else going on in your system. Are you seeing any WARN or ERROR log messages in your NiFi logs? Did you run out of disk space at any time? Are you seeing any Out Of Memory (OOM) errors in your NiFi logs?
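A quick way to check all three (a rough sketch, assuming a default install with logs under the NiFi logs/ directory; adjust paths for your environment):

```bash
# Recent warnings and errors in the app log
grep -E 'WARN|ERROR' logs/nifi-app.log | tail -n 50

# Count of Out Of Memory events
grep -c 'OutOfMemoryError' logs/nifi-app.log

# Make sure none of the repository disks filled up
df -h
```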
Thanks,
Matt
06-01-2017
11:23 AM
@Simran Kaur I had a feeling your issue was related to a missing config. Glad to hear you got it working. If this answer addressed your original question, please mark it as accepted. As far as your other question goes, I see you already started a new question (https://community.hortonworks.com/questions/105720/nifi-stream-using-listenhttp-processor-creates-too.html). That is the correct approach in this forum, since we want to avoid asking unrelated questions in the same post. I will have a look at that post as well. Thank you, Matt
06-01-2017
11:12 AM
@Oleksandr Solomko
What version of NiFi are you running? Are you seeing any Out Of Memory errors in your nifi logs? That could be causing issues with emptying the queue. The fastest way to clear this specific queue right now might be to:
1. Stop both the SplitAvro and PublishKafka processors.
2. Add an UpdateAttribute processor to your graph with its success relationship set to auto-terminate.
3. Select the queued connection and drag the blue dot (near the arrow end of the connection) to the UpdateAttribute processor.
4. Start the UpdateAttribute processor; it will auto-terminate the FlowFiles from this connection in batches.
Thanks,
Matt
06-01-2017
11:04 AM
1 Kudo
@Simran Kaur I see from your screenshot that your PutHDFS processor is producing bulletins (the red square in the upper right corner). If you hover your cursor over that red square, you will see the bulletin displayed. You can also look for the same error in the nifi-app.log; in many cases the error will be followed by a full stack trace there. That stack trace and the error log line may explain what your issue is here. If this does not help, please share your PutHDFS processor configuration.
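If you want to capture that detail from the log itself, something like this works (a rough sketch, assuming the default log location under the NiFi logs/ directory; adjust the path and the number of context lines for your install):

```bash
# Show each PutHDFS error along with the stack trace lines that follow it
grep -A 30 'ERROR.*PutHDFS' logs/nifi-app.log | less
```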
Thanks,
Matt
05-31-2017
09:36 PM
No problem... as soon as you mentioned that you were running HDF 2.1.2, it helped.
05-31-2017
08:46 PM
@Oliver Meyn You have run into the following bug: https://issues.apache.org/jira/browse/NIFI-3664 The good news is that the fix for this bug is part of HDF 2.1.3. Thank you, Matt
05-31-2017
04:49 PM
2 Kudos
@Oliver Meyn In a NiFi cluster, the time that is displayed could come from any one of the connected nodes. It is important to use NTP on every node in your NiFi cluster to make sure that time stays in sync. As far as timezone differences go, make sure the symlink for /etc/localtime is pointing at the same /usr/share/zoneinfo/... file on every one of your NiFi nodes. Run the "date --utc" command on all your nodes and compare its output to that of both of the following commands:

zdump /usr/share/zoneinfo/EST
zdump /usr/share/zoneinfo/US/Eastern

If you are looking for EDT time, you need to make sure that the symlink for /etc/localtime points to the following on all your nodes:

lrwxrwxrwx. 1 root root 25 Dec 1 2014 localtime -> ../usr/share/zoneinfo/US/Eastern
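If a node turns out to be pointing at the wrong zone, repointing the symlink is a one-liner (a rough sketch, assuming US/Eastern is the zone you actually want; run it on each affected node and verify with date afterwards):

```bash
# Repoint /etc/localtime at the US/Eastern zoneinfo file
sudo ln -sf /usr/share/zoneinfo/US/Eastern /etc/localtime

# Verify local and UTC time now agree across nodes
date && date --utc
```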
Thanks,
Matt