Member since: 07-30-2019
Posts: 3137
Kudos Received: 1565
Solutions: 911
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 29 | 01-21-2025 06:16 AM |
| | 85 | 01-16-2025 03:22 PM |
| | 197 | 01-09-2025 11:14 AM |
| | 1139 | 01-03-2025 05:59 AM |
| | 480 | 12-13-2024 10:58 AM |
02-09-2017
02:51 PM
2 Kudos
@milind pandit The errors you are seeing are expected during startup, since ZooKeeper will not establish quorum until all three nodes have completely started. As a node goes through its startup process it will begin trying to establish ZooKeeper quorum with the other ZooKeeper nodes. Those other nodes may not be running yet if they are still starting as well, which produces a lot of ERROR messages. Using the embedded ZooKeeper is not recommended in a production environment, since it is stopped and started along with NiFi. It is best to use dedicated external ZooKeepers in production. If the errors persist even after all three nodes are fully running, check the following:
1. Verify that you have enabled the embedded ZooKeeper on all three of your nodes.
2. Verify that the ZooKeeper process on each of your servers started and bound to the ports configured in your zookeeper.properties file.
3. Make sure you are using resolvable hostnames for each of your ZooKeeper nodes.
4. Make sure you do not have any firewalls that would prevent your NiFi nodes from communicating with each other over the configured ZooKeeper hostnames and ports.
Thanks, Matt
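For reference, a minimal sketch of what the embedded ZooKeeper configuration in conf/zookeeper.properties looks like on each node (the hostnames below are placeholders, not your actual nodes):
clientPort=2181
server.1=nifi-node1.example.com:2888:3888
server.2=nifi-node2.example.com:2888:3888
server.3=nifi-node3.example.com:2888:3888
Each node also needs a myid file in its ZooKeeper state directory whose number matches its server.N entry, and nifi.state.management.embedded.zookeeper.start=true in nifi.properties.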
02-09-2017
02:07 PM
1 Kudo
@mliem
Would you mind sharing your MergeContent processor's configuration? How large is the volume of tar files coming into your flow? How many concurrent tasks do you have on your UnpackContent? I ask because all of these may play a factor in the behavior you reported. My first thought is that you have too few bins configured in your MergeContent processor. The MergeContent processor places FlowFiles from the incoming queue into bins based on the configured "Correlation Attribute Name" (in your case this should be "fragment.identifier"). If the MergeContent processor runs out of available bins, the oldest bin is merged. In your case, since that oldest bin is incomplete (does not contain all fragments), it is routed to failure. For example, if "Maximum number of Bins" is set to 10 and your incoming queue contains FlowFiles produced from more than 10 original tar files, the MergeContent processor may need to create an 11th bin before all the FlowFiles that correlate to the other bins have been processed. There are a few things you could try here (1 being most recommended, and the bottom of the list being the last thing I would try):
1. Increase the "Maximum number of Bins" property in MergeContent.
2. Add the "OldestFlowFileFirstPrioritizer" to the "Selected Prioritizers" list on the queue feeding your MergeContent. This will have a small impact on throughput performance. When UnpackContent splits your tar files, all split files will have similar FlowFile creation timestamps. With this prioritizer set, FlowFiles will be placed in bins in timestamp order. If you use this strategy, you still need the number of bins set to at least the number of concurrent tasks assigned to your UnpackContent processor plus one.
3. Decrease the "Back Pressure Object Threshold" on the incoming queue to the MergeContent processor. This is a soft limit, so if you set it to 1000 and an UnpackContent untar results in 2000 FlowFiles, the queue will jump to 2000. The UnpackContent processor will then stop until that threshold drops back below 1000. This leaves fewer FlowFiles for your MergeContent processor to bin (meaning fewer bins are needed).
Thanks,
Matt
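For reference, a sketch of the MergeContent-related settings discussed above (the numeric values are illustrative assumptions, not taken from your flow):
Correlation Attribute Name = fragment.identifier
Maximum number of Bins = 25    (set this higher than the number of tar files likely to be unpacked concurrently)
Selected Prioritizers (on the queue feeding MergeContent) = OldestFlowFileFirstPrioritizer
Back Pressure Object Threshold (same queue) = 1000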
02-08-2017
09:55 PM
19 Kudos
What is Content Repository Archiving?
There are three properties in the nifi.properties file that deal with the archiving of content in the NiFi Content Repository. The default NiFi values for these are shown below:
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
The purpose of content archiving is to let users view and/or replay, via the provenance UI, content that is no longer active in their dataflow(s). The configured values do not have any impact on the amount of provenance history that is retained. If the content associated with a particular provenance event no longer exists in the content archive, provenance will simply report to the user that the content is not available. The content archive is kept within the same directory or directories where you have configured your content repository(s) to exist. When a "content claim" is archived, that claim is moved into an archive subdirectory within the same disk partition where it originally existed. This keeps archiving from affecting NiFi's content repository performance with unnecessary writes, such as those that would be needed to move archived files to a different disk or partition. The configured max retention period tells NiFi how long to keep an archived "content claim" before purging it from the content archive directory. The configured max usage percentage tells NiFi at what point it should start purging archived content claims to keep the overall disk usage at or below the configured percentage. This is a soft limit. Let's say the content repository is at 49% usage and a 4 GB content claim then becomes eligible for archiving. Once this content claim is archived, the usage may exceed the configured 50% threshold. At the next checkpoint, NiFi will remove the oldest archived content claim(s) to bring the overall disk usage back at or below 50%. For this reason, this value should never be set to 100%. The above two properties are enforced as an OR: whichever limit is reached first triggers the purging of archived content claims. Let's look at a couple of examples:
Example 1: Here you can see that our Content Repository has 35% of its disk consumed by content claims that are tied to FlowFiles still active somewhere in one or more dataflows on the NiFi canvas. This leaves 15% of the disk space to be used for archived content claims.
Example 2: Here you can see that the amount of content claims still active somewhere within your NiFi flow has exceeded 50% disk usage in the content repository. As such, there are no archived content claims.
The content repository archive settings have no bearing on how much of the content repository disk will be used by active FlowFiles in your dataflow(s). As such, it is possible for your content repository to still fill to 100% disk usage. *** This is the exact reason why, as a best practice, you should avoid co-locating your content repository with any of the other NiFi repositories. It should be isolated to its own disk(s) so that it does not affect other applications or the OS should it fill to 100%.
What is a Content Claim?
I have mentioned "content claims" throughout this article. Understanding what a content claim is will help you understand your disk usage. NiFi stores content in the content repository inside claims. A single claim can contain the content from one to many FlowFiles. The property that governs how a content claim is built is found in the nifi.properties file. The default configuration value is shown below:
nifi.content.claim.max.appendable.size=50 KB
The purpose of content claims is to make the most efficient use of disk storage. This is especially true when dealing with many very small files.
The configured max appendable size tells NiFi at what point it should stop appending additional content to an existing content claim and start a new claim. It does not mean all content ingested by NiFi must be smaller than 50 KB, and it does not mean that every content claim will be at least 50 KB in size. The examples below assume the max appendable size has been configured to 10 MB.
Example 1: Here you can see we have a single content claim that contains both large and small pieces of content. The overall size has exceeded the 10 MB max appendable size because, at the time NiFi started streaming that final piece of content into this claim, the claim's size was still below 10 MB.
Example 2: Here we can see a content claim that contains only one piece of content. This is because once that content was written to the claim, the claim exceeded the configured max appendable size. If your dataflow(s) deal with nothing but files larger than the configured max appendable size, all your content claims will contain only one piece of content.
So when is a "content claim" moved to archive? A content claim cannot be moved into the content repository archive until none of the pieces of content in that claim are tied to a FlowFile that is active anywhere within any dataflow on the NiFi canvas. What this means is that the reported cumulative size of all the FlowFiles in your dataflows will likely never match the actual disk usage in your content repository. This cumulative size is not the size of the content claims in which the queued FlowFiles reside, but rather just the reported cumulative size of the individual pieces of content. It is for this reason that it is possible for a NiFi content repository to hit 100% disk usage even if the NiFi UI reports a total cumulative queued data size of less than that. Take Example 1 from above: assuming the last piece of content written to that claim was 100 GB in size, all it would take is for one of those very small pieces of content in that same claim to still exist queued in a dataflow to prevent this claim from being archived. As long as any FlowFile still points at a content claim, that entire content claim cannot be purged.
When fine-tuning your NiFi default configurations, you must always take into consideration your intended data. If you are working with nothing but very small OR very large data, leave the default values alone. If you are working with data that ranges greatly from very small to very large, you may want to decrease the max appendable size and/or max flow file settings. By doing so you decrease the number of FlowFiles that make it into a single claim. This in turn reduces the likelihood of a single piece of data keeping large amounts of data still active in your content repository.
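To make the tuning advice concrete, here is a sketch of what the relevant nifi.properties entries might look like for a flow with widely varying content sizes (every value below is an illustrative assumption, not a recommendation from this article; nifi.content.claim.max.flow.files only exists in older NiFi releases):
nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=6 hours
nifi.content.repository.archive.max.usage.percentage=40%
nifi.content.claim.max.appendable.size=50 KB
nifi.content.claim.max.flow.files=50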
02-08-2017
05:22 PM
@milind pandit Tell us something about your particular NiFi installation method:
1. Was this NiFi cluster installed via Ambari or command line?
2. Are you using NiFi's embedded ZooKeeper or an external ZooKeeper ensemble?
3. Is this the entire stack trace from the nifi-app.log?
02-07-2017
03:06 PM
@Raj B Unfortunately not, but I think that being able to customize the login screen with some user-defined text is a cool idea. I suggest you create an Apache Jira for that enhancement idea: https://issues.apache.org/jira/secure/Dashboard.jspa The only other option is to create a unique label on the canvas of each of your environments. The drawback there is that the label is only visible within the process group in which it was created, and if you template your entire flow, that template would be carried from cluster to cluster and the label would then need to be updated.
Thanks, Matt
02-07-2017
02:38 PM
3 Kudos
@Raj B NiFi has an optional property in the nifi.properties file that allows you to place a banner at the top of your canvas:
nifi.ui.banner.text=
This banner remains visible no matter which process group the user is in. You could configure a unique banner for each of your environments. Thanks, Matt
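For example, on a development cluster the entry in nifi.properties might look like this (the banner text itself is just an illustration):
nifi.ui.banner.text=DEV CLUSTER - flows here are not production
NiFi must be restarted before a change to nifi.properties takes effect.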
02-07-2017
02:26 PM
1 Kudo
@Naresh Kumar Korvi The "Conditions" specified for your rule must resolve to a boolean "true" before the associated "Actions" will be applied to the incoming FlowFile. The condition you have in the screenshot will always resolve to true. Looking at your "dirname" attribute, it is not going to return your desired directory path of period1-year/p1-week1/date, and your "filename" attribute will also be missing the .json extension you are looking for (date.json). I believe what you are trying to do is better accomplished using the below "Condition" and "Action" configurations:
Condition: ${now():format('MM'):le(2):and(${now():format('dd'):le(25)})}
dirname: period1-${now():format('yyyy')}/p1-${now():format('ww')}/${now():format('MM-dd-yyyy')}
filename: ${now():format('MM-dd-yyyy')}.json
Thanks, Matt
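As a quick sanity check of those expressions (the evaluation date below is an assumed example, and the week number produced by format('ww') depends on your JVM locale settings):
Evaluated on 02-07-2017, the Condition is true (month 02 <= 2 and day 07 <= 25), so the Actions would produce roughly:
dirname -> period1-2017/p1-06/02-07-2017
filename -> 02-07-2017.json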
02-06-2017
09:27 PM
@Saurabh Verma How to change the JDK version for an existing cluster
1. Re-run Ambari Server Setup:
ambari-server setup
2. At the prompt to change the JDK, enter y:
Do you want to change Oracle JDK [y/n] (n)? y
3. At the prompt to choose a JDK, enter 1 to change the JDK to v1.8:
[1] - Oracle JDK 1.8
[2] - Oracle JDK 1.7
[3] - Custom JDK
If you choose Oracle JDK 1.8 or Oracle JDK 1.7, the JDK you choose is downloaded and installed automatically on the Ambari Server host. This option requires an internet connection, and you must install the same JDK to the same path on all hosts in the cluster. If you choose Custom JDK, verify or add the custom JDK path on all hosts in the cluster. Use this option if you want to use OpenJDK or do not have an internet connection (and have pre-installed the JDK on all hosts).
4. After setup completes, restart each component so the new JDK is used.
Important: You must also update your JCE security policy files on the Ambari Server and all hosts in the cluster to match the new JDK version. If you do not update the JCE to match the JDK, you may have issues starting services. Refer to the Ambari Security Guide for more information on installing the JCE. Thanks, Matt
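A condensed version of the same procedure on the Ambari Server host (the choice of JDK 1.8 here is just an example):
ambari-server setup      # answer y when asked to change the JDK, then select [1] Oracle JDK 1.8
ambari-server restart    # restart Ambari Server so it runs on the new JDK
Then restart the cluster services from the Ambari UI so every component picks up the new JDK.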
02-06-2017
04:07 PM
@Benjamin Hopp What reason is NiFi giving in the nifi-app.log for the node disconnections? Rather than restarting the node that disconnects, did you try just clicking the reconnect icon in the cluster UI? Verify that your nodes do not have trouble communicating with each other. Make sure there are no firewalls between the nodes affecting communications to the HTTP/HTTPS ports:
nifi.web.http.host=nifi-ambari-08.openstacklocal
nifi.web.http.port=8090
nifi.web.https.host=nifi-ambari-08.openstacklocal
nifi.web.https.port=9091
or node communication port:
nifi.cluster.node.address=nifi-ambari-08.openstacklocal
nifi.cluster.node.protocol.port=9088
Make sure both your nodes are properly configured to talk to ZooKeeper and neither has issues communicating with it:
nifi.zookeeper.connect.string=nifi-ambari-09.openstacklocal:2181,nifi-ambari-07.openstacklocal:2181,nifi-ambari-08.openstacklocal:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.root.node=/nifi
nifi.zookeeper.session.timeout=3 sec
All of the above settings are in the nifi.properties file. Thanks, Matt
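One quick way to confirm reachability from each node (nc is just one option; the hostnames and ports below are taken from the properties above):
nc -vz nifi-ambari-08.openstacklocal 9088    # cluster node protocol port
nc -vz nifi-ambari-08.openstacklocal 9091    # HTTPS port
nc -vz nifi-ambari-09.openstacklocal 2181    # ZooKeeper client port
Repeat the checks from every node toward every other node; a timeout usually points to a firewall or DNS issue.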
02-06-2017
01:28 PM
4 Kudos
@Naresh Kumar Korvi You will want to stick with the "Bin-Packing Algorithm" merge strategy in your case. The reason you are ending up with single files is the way the MergeContent processor is designed to work; there are several factors in play here. The MergeContent processor will start the content of each new FlowFile on a new line. However, at times the incoming content of each FlowFile may itself span multiple lines, so it may be desirable to put a user-defined "Demarcator" between the content of each FlowFile should you need to differentiate the merged content at a later time. If that is the case, the MergeContent processor provides a "Demarcator" property to accomplish this. An UpdateAttribute processor can be used after the MergeContent processor to set a new "filename" on the resulting merged FlowFile. I am not sure of the exact filename format you want to use, but here is an example config that produces a filename like "2017-02-06": Thanks, Matt
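A rough sketch of an UpdateAttribute configuration that would produce that filename (the date pattern is an assumption based on the "2017-02-06" example above):
Property: filename
Value: ${now():format('yyyy-MM-dd')}
If the merged file should instead be named for the day its content was created rather than the day it was merged, the expression could be based on a date attribute carried on the source FlowFiles instead of now().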