Member since: 07-30-2019
Posts: 3414
Kudos Received: 1623
Solutions: 1008
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 397 | 12-17-2025 05:55 AM |
| | 458 | 12-15-2025 01:29 PM |
| | 480 | 12-15-2025 06:50 AM |
| | 387 | 12-05-2025 08:25 AM |
| | 632 | 12-03-2025 10:21 AM |
03-22-2016
02:27 PM
There have also been many improvements to the underlying code for the Kafka processors in newer releases of NiFi. I recommend upgrading.
03-22-2016
02:25 PM
5 Kudos
This ERROR message is telling you that the configured buffer in your PutKafka processor was not large enough to accommodate the batch of files it wanted to transfer to Kafka. The log above shows that a batch of 3 files was created, 2 of the files from that batch transferred successfully, and 1 file was routed to PutKafka's failure relationship. The total size of the batch was recorded as 4294967296 bytes (4 GB). These are very large files for Kafka... The failure relationship should be looped back to the PutKafka processor so that, after a short penalization, the failed file is retransmitted. There are four settings at play in the PutKafka processor that you will want to tune:
- Max Buffer Size <-- max amount of reserved buffer space
- Max Record Size <-- max size of any one record
- Batch Size <-- max number of records to batch
- Queue Buffering Max Time <-- max amount of time spent on batching before transmitting

*** The batch will be transmitted when either the Batch Size is satisfied or Queue Buffering Max Time is reached. Considering the size of the messages you are trying to send to your Kafka topic, I would recommend the following settings:
- Max Buffer Size: 2 GB
- Max Record Size: 2 GB
- Batch Size: 1
- Queue Buffering Max Time: 100 ms
Since you will be sending one file at a time, you may want to increase the number of Concurrent Tasks configured on the "Scheduling" tab of the PutKafka processor. Only do this if the processor cannot keep up with the flow of data, so start with the default of 1 and increase by only 1 at a time if needed. Keep in mind that the buffered records live in your JVM heap, so the more concurrent tasks and the larger the Max Buffer Size configuration, the more heap this processor will use.
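As a rough back-of-the-envelope reading of that heap note (my own approximation, not an exact formula), the buffer space this processor can hold grows with both settings:

```
heap held by PutKafka buffers  ~  Max Buffer Size x Concurrent Tasks
  2 GB x 1 concurrent task   ->  ~2 GB of JVM heap
  2 GB x 3 concurrent tasks  ->  ~6 GB of JVM heap   <-- why Concurrent Tasks is raised only 1 at a time
```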
Thanks, Matt
03-16-2016
10:13 PM
The NCM in a NiFi cluster typically needs more heap memory. The number of components (processors, input ports, output ports, and relationships) on the graph multiplied by the number of nodes in the NiFi cluster drives how much memory your NCM will need. For roughly 300 - 400 components and a 3 - 4 node cluster, the NCM does well with 8 GB of heap. If you still encounter heap issues, you would need to increase the heap size and/or reduce the status buffer size and/or snapshot frequency in the nifi.properties files (NCM and nodes):
nifi.components.status.repository.buffer.size=360 (default is 1440)
nifi.components.status.snapshot.frequency=5 min (default is 1 min)
This information is accurate as of NiFi 0.5.1 and HDF 1.1.2.
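To make that trade-off concrete (simple arithmetic on the two settings above; the buffer size is the number of status snapshots retained per component, and the frequency is how often a snapshot is taken):

```
default: 1440 snapshots x 1 min interval = 24 hours of status history
tuned:    360 snapshots x 5 min interval = 30 hours of status history,
          with one quarter as many snapshots (per component, per node) held in NCM heap
```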
03-15-2016
07:33 PM
1 Kudo
For your scenario with 12 disks (assuming all disks are 200 GB):
You can specify/define multiple Content repos and multiple Provenance repos; however, you can only define one FlowFile repository and one database repository (see the nifi.properties sketch after this layout).
- 8 disks for Content repos:
- /cont_repo1 <-- 200 GB
- /cont_repo2 <-- 200 GB
- /cont_repo3 <-- 200 GB
- /cont_repo4 <-- 200 GB
- /cont_repo5 <-- 200 GB
- /cont_repo6 <-- 200 GB
- /cont_repo7 <-- 200 GB
- /cont_repo8 <-- 200 GB
- 2 disks for Provenance repos:
- /prov_repo1 <-- 200 GB
- /prov_repo2 <-- 200 GB
- 1 disk split into multiple partitions for:
- /var/log/nifi-logs/ <-- 100 GB
- OS partitions <-- remaining space split amongst the standard OS partitions (/tmp, /, etc...)
- 1 disk split into multiple partitions for:
- /opt/nifi <-- 50 GB
- /flowfile_repo/ <-- 50 GB
- /database_repo/ <-- 25 GB
- /opt/configuration-resources <-- 25 GB (this will hold any certs, config files, and extras your NiFi processors/dataflows may need)
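For reference, these repositories map to entries in nifi.properties. A sketch using the mount points from the layout above (the suffix after "directory." for the content and provenance repos is an arbitrary name you choose):

```
nifi.flowfile.repository.directory=/flowfile_repo
nifi.database.directory=/database_repo
nifi.content.repository.directory.default=/cont_repo1
nifi.content.repository.directory.cont_repo2=/cont_repo2
... (one entry per content repo disk, through /cont_repo8)
nifi.provenance.repository.directory.default=/prov_repo1
nifi.provenance.repository.directory.prov_repo2=/prov_repo2
```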
03-15-2016
07:23 PM
6 Kudos
There is no direct correlation between the size of the content repository and the provenance repository.

The size the content repository will grow to is directly tied to the amount of unique content that is currently queued on the NiFi canvas. If archive is enabled, the amount of content repository space consumed will also depend on the archive configuration settings in the nifi.properties file:

nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=75%

As you can see from the above, archive will try to retain 12 hours of archived content (archived content being content that is no longer associated with an existing queued FlowFile within any dataflow on the graph). This does not guarantee that there will be any archive, or that the content repository will not grow beyond 75% disk utilization: content still actively associated with queued FlowFiles remains in the content repository. So it is important to build back pressure into dataflows wherever there is concern that large backlogs could fill the disk to 100%. Should the content repo fill to 100%, corruption will not occur, but new FlowFiles cannot be created until free space is available, which is likely to produce a lot of errors in the flow (anywhere content is modified/written).

The provenance repository size is directly related to the number of FlowFiles and the number of event-generating processors those FlowFiles pass through on the NiFi canvas. Disk utilization here is controlled by settings in the nifi.properties file:

nifi.provenance.repository.max.storage.time=7 days
nifi.provenance.repository.max.storage.size=50 GB

With the above settings, NiFi will try to retain 7 days of provenance events for every FlowFile it processes, but will start rolling off the oldest events once the max storage exceeds 50 GB. It is important to understand that the 75% and 50 GB values are soft limits and should never be set to 100% or the exact size of the disk.

The FlowFile repository and database repository each remain relatively small. The FlowFile repository is the most important repo of all. It should be isolated on a separate disk/partition that is not shared with any other process that may fill it; allowing the FlowFile repository disk to fill to 100% can lead to database corruption and lost data. For a 200 GB content repository, a ~25 GB FlowFile repo should be enough. The database repository contains the user and change history DBs. The user DB will remain 0 bytes in size for NiFi instances running HTTP (non-secure); for instances running HTTPS (secure), the user DB tracks all users who log in to the UI. The change history DB is tied to the little clock icon in the upper right corner of the NiFi toolbar; it keeps track of all changes made on the NiFi graph/canvas and also stays relatively small. A few GB of space should be plenty to store a considerable number of changes.
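Back pressure is set per connection rather than in nifi.properties. A minimal sketch of the two connection settings involved (the thresholds shown are only example values, not recommendations):

```
Connection > Settings
  Back Pressure Object Threshold    : 10000    <-- stop the upstream component once this many FlowFiles are queued
  Back Pressure Data Size Threshold : 1 GB     <-- or once this much queued content has accumulated
```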
03-15-2016
12:05 PM
1 Kudo
@Lubin Lemarchand you are correct. Thank you for filling in the details.
03-14-2016
03:52 PM
3 Kudos
Here is a basic sizing chart for HDF: *** But you must keep in mind that these requirements may grow depending on what processors you use in your dataflow. Memory need is often one that grows quicker than CPU need. *** Also understand that these sizing scenarios are based upon setting up your NiFi instance(s) per the best practice documentation provided.
03-14-2016
01:28 PM
1 Kudo
Shishir, I agree that you should carefully review all the documentation links provided by Artem Ervits, but you also need to understand that the loading behavior of any given NiFi instance is directly tied to which processors are being used. While some processors have little impact on CPU and/or memory, others can impact them significantly. Capacity planning needs to take into consideration the dataflows you want to run: what kind of data content manipulation you want to do (MergeContent, SplitContent, ReplaceContent, etc...), data sizes and volumes, how many NiFi nodes you have and how you plan to distribute the data load, and so on.
03-09-2016
12:42 PM
5 Kudos
I am assuming you are using the InvokeHTTP processor and that you want to use one of the new attributes created on your FlowFile, in response to the request, to add to the content of that same FlowFile. You will want to make sure you have the "Put Response Body in Attribute" property configured in the InvokeHTTP processor. You can then use the ReplaceText processor with an Evaluation Mode of "Entire text" and a Replacement Strategy of "Append". This lets you write a NiFi Expression Language statement that reads the attribute you specified for the response body and appends its value to your original JSON content.
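A minimal sketch of that ReplaceText configuration, assuming the InvokeHTTP "Put Response Body in Attribute" property was set to an attribute named http.response.body (the attribute name is whatever you chose; this one is just for illustration):

```
ReplaceText
  Evaluation Mode      : Entire text
  Replacement Strategy : Append
  Replacement Value    : ${http.response.body}
```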
02-16-2016
04:27 PM
1 Kudo
@cokorda putra susila NiFi already includes the HDFS core libraries, so there is no need to install Hadoop on the NiFi server. You just need the config files (e.g., core-site.xml), as Artem suggests.
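A minimal sketch of how the HDFS processors (PutHDFS, GetHDFS, etc.) point at those files via their "Hadoop Configuration Resources" property (the paths shown are just an assumed example location):

```
Hadoop Configuration Resources : /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
```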