Member since: 07-30-2019
Posts: 3387
Kudos Received: 1617
Solutions: 998
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 343 | 10-20-2025 06:29 AM |
| | 483 | 10-10-2025 08:03 AM |
| | 345 | 10-08-2025 10:52 AM |
| | 377 | 10-08-2025 10:36 AM |
| | 402 | 10-03-2025 06:04 AM |
01-20-2017
04:35 PM
@Balakrishnan Ramasamy The behavior of the input dialog box is not configurable. Once you apply the password, it is encrypted and stored in the flow.xml.gz file. If a user opens the dialog box, they are simply informed that a sensitive property has been set. It is not possible to retrieve the plain-text, unencrypted version of the password after it has been applied.
Please accept the answer if it addressed your initial question. Thanks, Matt
01-20-2017
03:05 PM
3 Kudos
@Michal R You can use an UpdateAttribute processor to change the filename. However, that alone would end up giving every file the same filename. Assuming each input filename is unique except for its .csv extension, you could do the following:
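A minimal UpdateAttribute sketch (the Expression Language function shown is one way to do it; adjust to your actual naming):

```
Property name:  filename
Property value: ${filename:substringBeforeLast('.csv')}.avro
```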
This would essentially replace the .csv extension with .avro while leaving the rest of the original filename unchanged. Thanks, Matt
01-20-2017
02:32 PM
2 Kudos
@bala krishnan When templates are added to the canvas, the components are always in a stopped state. In some cases those processors or components may even be marked as invalid because they need one or more properties set. This occurs when the component contains required sensitive properties, or when the processor in the deployed instance is a newer version that has had additional required properties added to it. NiFi templates are sanitized of all sensitive properties when they are created. This is done because the encryption of these sensitive properties is tied directly to the specific sensitive props key the user creates for each NiFi installation.

The PutHiveQL processor has a dependency on the HiveConnectionPool controller service. If this controller service cannot be started, then the processor will also not be able to start.

Take a look at the following: https://github.com/aperepel/nifi-api-deploy You may be able to use this approach to automate deployment and starting of your template (a rough curl sketch follows below). Matt
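As an untested sketch of that kind of automation against the NiFi REST API (the host, process group ID, template ID, and file name below are all placeholders):

```
# Upload a template file into a process group
curl -X POST "http://nifi-host:8080/nifi-api/process-groups/<pg-id>/templates/upload" \
  -F "template=@my-flow.xml"

# Instantiate the uploaded template onto the canvas
curl -X POST "http://nifi-host:8080/nifi-api/process-groups/<pg-id>/template-instance" \
  -H "Content-Type: application/json" \
  -d '{"templateId": "<template-id>", "originX": 0.0, "originY": 0.0}'
```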
01-19-2017
04:03 PM
@David Arllen You have an interesting scenario here. NiFi's design in this regard carries the expectation that when you build a dataflow, any node in the cluster is able to run that dataflow. This ensures true failover capability in the event of a NiFi node failure. In your case, I understand you are looking for failover at the dataflow level. So GetHTTP is configured to run "on primary node" only. Let's say DC_B is your current primary node and everything is working fine. At some point GetHTTP starts failing to connect to the source. What you want is for NiFi to detect this and switch the primary node designation to another node in the cluster, in hopes that the GetHTTP processor will succeed there. Two things immediately come to mind:
1. A dataflow may have a number of processors in different dataflows all using the "on primary node" execution. If NiFi were to switch primary node because just one of those processors was failing, it would switch for all of them.
2. What if the source was truly down and none of your nodes could connect to it? NiFi would continue to switch from node to node, hoping to eventually hit a node that can once again connect. This would affect the other primary node flows that are working.
Or am I off, and you are not talking about a NiFi cluster that spans multiple data centers, but rather three data centers all running standalone NiFi instances? If that is the case, you may be able to have a process monitor the NiFi app log for GetHTTP failures, use curl to stop the GetHTTP processor on DC_B, and call it to start on DC_C (see the sketch below). You do, however, lose centralized management of your dataflow this way. Bottom line: this feature does not exist in NiFi currently. Matt
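If you went that standalone-instances route, the monitor script's stop/start calls might look roughly like this in NiFi releases that expose the run-status endpoint (hosts, processor IDs, and revision versions are placeholders, and a secured instance would also need credentials):

```
# Stop the GetHTTP processor on DC_B
curl -X PUT "http://dc-b-nifi:8080/nifi-api/processors/<processor-id>/run-status" \
  -H "Content-Type: application/json" \
  -d '{"revision": {"version": 1}, "state": "STOPPED"}'

# Start the equivalent GetHTTP processor on DC_C
curl -X PUT "http://dc-c-nifi:8080/nifi-api/processors/<processor-id>/run-status" \
  -H "Content-Type: application/json" \
  -d '{"revision": {"version": 1}, "state": "RUNNING"}'
```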
01-18-2017
01:58 PM
@Anubhav Raikar I think we have a terminology issue here. :)
A NiFi FlowFile consists of two parts: the FlowFile attributes (metadata about the FlowFile) and the FlowFile content (in your case, the actual JSON). FlowFile attributes consist of key/value pairs (a property name and an associated value). So if I understand you correctly, you have a FlowFile with a FlowFile attribute whose property name is "CREDIT_FLAG" and whose value is "C". You are looking for a way to change that property name from "CREDIT_FLAG" to "CR_FL" (a simple sketch for a static rename is below). You mention a criterion that "CR_FL" also exists in your flow? Are you saying you expect "CR_FL" to also exist in this same FlowFile, or in some other FlowFile elsewhere in your NiFi dataflow? NiFi processors interact with the attributes and content of a single FlowFile and do not cross-reference other processors or FlowFiles.

I cannot think of any processors that would allow you to dynamically create FlowFile attribute property names; property names do not accept Expression Language. I am also trying to understand the value of a dynamically created FlowFile attribute property name. What use would that attribute have later in your dataflow? Since it is dynamically created, how would you reference it later in some other Expression Language statement? FlowFile attributes are not part of the FlowFile content that is written out when the FlowFile is processed by a PutFile or PutSFTP. They can be referenced as headers and such by processors like PutSQL or InvokeHTTP, but how would you do that if you don't know what the FlowFile attribute property name is? Matt
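For completeness, a static rename (where both names are known ahead of time) is simple with UpdateAttribute; a minimal sketch using your example names:

```
# UpdateAttribute configuration (sketch)
Added dynamic property:        CR_FL = ${CREDIT_FLAG}
Delete Attributes Expression:  CREDIT_FLAG
```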
01-18-2017
01:09 PM
Can you provide a little more detail on your use case? Are you trying to dynamically create new FlowFile attributes, or are you trying to update the JSON tag names in the FlowFile content itself?
The UpdateAttribute processor works with FlowFile attributes using the Expression Language. The FlowFile's content is not read by this processor.
01-17-2017
02:38 PM
2 Kudos
@Sebastian Carroll It is hard to say what any custom code is doing as far as heap usage, but some existing processors can use considerable heap space. I would say that FlowFile attributes consume the majority of the heap space in most cases. A FlowFile consists of two parts: the FlowFile content (which lives in the NiFi content repository) and the FlowFile attributes (metadata about the FlowFile, which lives in heap [1]). While generally the amount of heap that FlowFile attributes consume is relatively small, users can build flows that have the exact opposite effect. If a dataflow uses processors to read large amounts of content and write it to FlowFile attributes, heap usage will go up rapidly. If users allow large connection queues to build within the dataflow, heap usage will also go up.
- Evaluate available system memory and the configured heap size for your NiFi. The heap defaults for NiFi are relatively small. They are set in bootstrap.conf and default to only 512 MB min and max, which is generally too small for any significant dataflow. I recommend setting min and max to the same value. Adjust these values according to available free memory on your system without going too crazy. Try 4096 MB first and see how that performs (see the bootstrap.conf sketch after the notes below). Adjusting heap settings requires a NiFi restart to take effect.
- Evaluate your dataflow for areas where high connection queues exist. Setting backpressure throughout your dataflow is one way to keep queues from growing too large.
- Evaluate your flow for anywhere you may be extracting content from your FlowFiles into FlowFile attributes. Is it necessary, or can the amount of content extracted be reduced?
- Processors like MergeContent, SplitContent, SplitText, etc. can use a lot of heap depending on the incoming FlowFile(s) and configuration. For example, a MergeContent configured to merge 100,000 FlowFiles is going to use a lot of heap binning that many FlowFiles. A better approach is to use two MergeContent processors in a row, the first merging 10,000 FlowFiles and the second merging bundles of 10 to reach the desired 100,000 end result. The same goes for SplitText: if your source FlowFile results in excess of 10,000 splits, try using two SplitText processors (the first splitting every 10,000 lines and the second splitting those by every line). With either example you are reducing the number of FlowFiles held in heap memory at any given time.

Notes:
[1] NiFi uses FlowFile swapping to help reduce heap usage. FlowFile attributes live in heap memory for faster processing. If a connection exceeds the configured swap threshold (default 10,000, set in nifi.properties), NiFi begins swapping FlowFile attributes out to disk. Remember that this swapping is per connection, and it is triggered by object count thresholds rather than by heap usage, so the threshold may need to be adjusted based on average FlowFile attribute size.
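For reference, the bootstrap.conf heap settings mentioned in the first bullet would look something like this (4096 MB is only the suggested starting point):

```
# conf/bootstrap.conf -- JVM heap settings (defaults are 512 MB)
java.arg.2=-Xms4096m
java.arg.3=-Xmx4096m
```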
01-12-2017
11:17 PM
@Adda Fuentes No problem.
01-12-2017
11:02 PM
1 Kudo
@Raj B You can think of "Max Bin Age" as the trump card. Regardless of any other min criteria being met, the bin will be merged once it reaches this max age, so your assumption is completely correct. That aside, you need to take heap usage into consideration with the dataflow design you have here. FlowFile attributes (metadata) live in heap memory for performance reasons, so as you bin these FlowFiles throughout the day, your JVM heap usage is going to grow and grow. How many FlowFiles per day are you talking about here? If you are talking in excess of 10,000 FlowFiles, you may need to adjust your dataflow some. For example, use two MergeContent processors back to back: the first merges at, let's say, a max bin age of 5 minutes; the second then merges those bundles into a large 24-hour bundle. So one new FlowFile is created every 5 minutes, and those 288 merged FlowFiles are merged into a larger FlowFile in the second MergeContent (see the sketch below). Doing it this way greatly reduces heap usage. Of course, depending on volumes, you may need to merge even more often than every 5 minutes to achieve optimal heap usage. Just some food for thought.... Matt
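A sketch of that back-to-back MergeContent configuration (entry counts and ages are illustrative and depend on your volumes):

```
# First MergeContent: bin incoming FlowFiles into 5-minute bundles
Max Bin Age:               5 min

# Second MergeContent: merge the ~288 five-minute bundles into one daily file
Minimum Number of Entries: 288
Max Bin Age:               24 hours
```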
01-12-2017
10:51 PM
1 Kudo
@Adda Fuentes NiFi authentication always defaults to TLS certificates. If the user does not present a user certificate, then NiFi will fall back to the alternate configured login identity provider (either LDAP or Kerberos). NiFi does not support specifying more than one of these alternate login identity providers (ldap-provider or kerberos-provider) at a time. Current versions of NiFi have also added SPNEGO support for user authentication. When configured in the nifi.properties file, this authentication falls between user certificates and any login identity providers configured in the login-identity-providers.xml file. Setting up SPNEGO requires configuration changes to your browser to support logging in without needing a username and password, as you would with the kerberos-provider. See below for more details on setting up SPNEGO for user authentication: http://bryanbende.com/development/2016/08/31/apache-nifi-1.0.0-kerberos-authentication

The identity mapping patterns allow you to take the DN returned by LDAP, or from the user's certificate, and map it to a different value. This makes it easier to set up user authorizations, since you only need to provide that mapped value as the user name for the authorization instead of the full DN. The Kerberos pattern mapping has similar intent, so you may use pattern mapping to remove the @domain portion of the principal (see the nifi.properties example below). Matt
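As an illustration, the identity mapping properties in nifi.properties might look like this (the patterns are examples only; match them to your actual DNs and principals):

```
# nifi.properties -- map a full DN down to just the CN
nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?)$
nifi.security.identity.mapping.value.dn=$1

# Map a Kerberos principal, stripping the @REALM portion
nifi.security.identity.mapping.pattern.kerb=^(.*?)@(.*?)$
nifi.security.identity.mapping.value.kerb=$1
```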