Member since: 07-30-2019
Posts: 3414
Kudos Received: 1623
Solutions: 1008
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 401 | 12-17-2025 05:55 AM |
| | 462 | 12-15-2025 01:29 PM |
| | 482 | 12-15-2025 06:50 AM |
| | 387 | 12-05-2025 08:25 AM |
| | 637 | 12-03-2025 10:21 AM |
01-30-2017
01:21 PM
4 Kudos
@Saminathan A The PutSQL processor expects that each FlowFile contains a single SQL statement and does not support multiple insert statements as you have tried above. You can have the GetFile Processor route its success relationship twice with each success going to its own ReplaceText processor. Each ReplaceText processor is then configured to create either the "table_a" or "table_b" insert statement. The success from both ReplaceText processors could then be routed to the same PutSQL processor. Thanks, Matt
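To illustrate the one-statement-per-FlowFile rule, after the split each ReplaceText processor would leave its FlowFile containing exactly one statement, along these lines (table and column names here are hypothetical examples, not from the original question):

```sql
-- FlowFile content leaving the first ReplaceText (table_a branch)
INSERT INTO table_a (id, payload) VALUES (1, 'value-a');
```

```sql
-- FlowFile content leaving the second ReplaceText (table_b branch)
INSERT INTO table_b (id, payload) VALUES (1, 'value-b');
```

Each FlowFile carries only one of these statements, so the shared PutSQL processor can execute them independently.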
01-30-2017
01:11 PM
2 Kudos
@Joshua Adeleke The message you are seeing here indicates that your NiFi instance has been set up as a cluster (possibly a one-node cluster). NiFi cluster configurations require ZooKeeper in order to handle cluster coordinator elections and cluster state management. If you truly want a standalone NiFi installation with no dependency on these things, you need to make sure the following property in your nifi.properties file is set to false: nifi.cluster.is.node=false Even as a single-node NiFi cluster, if ZooKeeper (internal or external) was set up, the election should complete eventually and the node will then become accessible. How long it takes for the cluster coordinator to get elected is controlled by the following lines in your nifi.properties file: nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
The above shows the defaults. If max.candidates is left blank (it is normally set to the number of nodes in your NiFi cluster), the election process will take the full 5 minutes to complete before the UI becomes available. If max.candidates is set, the election will complete when either all candidate nodes have checked in or 5 minutes have elapsed, whichever occurs first. Thanks, Matt
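Putting the pieces above together, a standalone (non-clustered) setup would carry these nifi.properties values; the election properties only come into play when clustering is enabled:

```properties
# Run as a standalone instance, no ZooKeeper dependency
nifi.cluster.is.node=false

# Cluster-only election tuning (defaults shown)
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
```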
01-27-2017
08:53 PM
7 Kudos
With HDF 2.x, Ambari can be used to deploy a NiFi cluster. Let's say you deployed a 2-node cluster and want to go back at a later time and add an additional NiFi node to the cluster. While the process is very straightforward when your NiFi cluster has been set up non-secure (http), the same is not true if your existing NiFi cluster has been secured (https). Below you will see an existing 2-node secured NiFi cluster that was installed via Ambari:

STEP 1: Add the new host through Ambari. You can skip this step if the host you want to install the additional NiFi node on is already managed by your Ambari.

STEP 2: Under "Hosts" in Ambari, click on the host from the list where you want to install the new NiFi node. The NiFi component will be in a "stopped" state after it is installed on this new host. *** DO NOT START NIFI YET ON THE NEW HOST OR IT WILL FAIL TO JOIN THE CLUSTER. ***

STEP 3: (This step only applies if NiFi's file-based authorizer is being used) Before starting this new node, we need to clear out some NiFi configs. This step is necessary because of how the NiFi application starts. When NiFi starts, it looks for the existence of the users.xml and authorizations.xml files. If they do not exist, it uses the configured "Initial Admin Identity" and "Node Identities (1, 2, 3, etc...)" to build the users.xml and authorizations.xml files. This causes a problem because your existing cluster's users.xml and authorizations.xml files likely contain many more entries by now. Any mismatches in these files will prevent a node from being able to join the cluster. If these configurations are not present, the new node will grab the files from the cluster it joins. Below shows what configs need to be cleared in NiFi: *Note: Another option is to simply copy the users.xml and authorizations.xml files from an existing cluster node to the new node before starting the new node.
STEP 4: (Do this step if using Ambari metrics) When a new node is added by Ambari and Ambari metrics are also enabled, Ambari will create a flow.xml.gz file that contains just the Ambari reporting task. Later, when this node tries to join the cluster, the flow.xml.gz files on this new node and on the cluster will not match. This mismatch will cause the new node to fail to join the cluster and shut back down. To avoid this problem, the flow.xml.gz file must be copied from one of the cluster's existing nodes to this new node.

STEP 5: Start NiFi on this new node. After the node has started, it should successfully join your existing cluster. If it fails, the nifi-app.log will explain why, but the cause will likely be one of the above configs not being cleared out, causing the users.xml and authorizations.xml files to be generated rather than inherited from the cluster. If that is the case, you will need to fix the configs and delete those files manually before restarting the node again.

STEP 6: While your cluster is now up and running with the additional node, you will notice you cannot open the UI of that new node without getting an untrusted proxy error screen. You will, however, still be able to access your other two nodes' UIs. So we need to authorize this new node in your cluster.

A. If NiFi handles your authorizations, follow this procedure: 1. Log in to the UI of one of the original cluster nodes. The "proxy user requests" access policy is needed to allow users to access the UI of your nodes. NOTE: There may be additional component-level access policies (such as "view the data" and "modify the data") you may also want to authorize this new node for.

B. If Ranger handles your NiFi authorizations, follow this procedure: 1. Access the Ranger UI. 2. Click Save to create the new user for your new node. The username MUST match exactly the DN displayed in the untrusted proxy error screen. 3. Access the NiFi Service Manager in Ranger and authorize your new node in your existing access policies as needed.

You should now have a fully functional new node added to your pre-existing secured NiFi cluster that was deployed/installed via Ambari.
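Before starting the new node, it can be worth confirming that the copied files (flow.xml.gz, users.xml, authorizations.xml) really are identical to the cluster's copies. This is a minimal checksum-comparison sketch, not part of NiFi itself, and the paths in the comment are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def files_match(path_a, path_b):
    """True if the two files are byte-for-byte identical."""
    return sha256_of(path_a) == sha256_of(path_b)

# Example (hypothetical paths): compare the new node's copy against the
# version fetched from an existing cluster node.
# files_match("/var/lib/nifi/conf/flow.xml.gz", "/tmp/node1/flow.xml.gz")
```

If the digests differ, the new node would generate or keep a mismatched file and fail to join, as described in steps 4 and 5 above.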
01-27-2017
07:54 PM
1 Kudo
The only time it would not be the full DN is if you configured pattern mapping in your nifi.properties file.
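For reference, that pattern mapping lives in the identity-mapping properties of nifi.properties. A sketch (the regex below is only an example DN shape, not a recommended value) that reduces a full DN to just its CN component might look like:

```properties
# Example only: map a DN such as "CN=nifi-node1, OU=NIFI" down to "nifi-node1"
nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?)$
nifi.security.identity.mapping.value.dn=$1
```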
01-27-2017
07:39 PM
1 Kudo
@Sunile Manjee
The users.xml file you have above was not generated by NiFi. Did you manually create it? You should not need to do that. On first start of NiFi after enabling https, NiFi will generate both the users.xml and authorizations.xml files from the configurations in the authorizers.xml file. If the users.xml and authorizations.xml files already exist, NiFi will not modify or re-create them during startup, even if you change the configurations in the authorizers.xml file for "Initial Admin Identities" or "Node Identities". In order to have NiFi create those files again, you will need to remove or rename the current users.xml and authorizations.xml files before restarting NiFi. Thanks, Matt
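A safe way to force regeneration is to rename the existing files aside rather than delete them outright. This is a small illustrative sketch (the conf directory location is an assumption, and the function name is made up for this example), not a NiFi utility:

```python
from pathlib import Path

def set_aside_authz_files(conf_dir):
    """Rename users.xml and authorizations.xml to *.xml.bak so that NiFi
    regenerates them from authorizers.xml on its next start.
    Returns the list of files that were renamed."""
    renamed = []
    for name in ("users.xml", "authorizations.xml"):
        f = Path(conf_dir) / name
        if f.exists():
            # Keep a backup instead of deleting, in case you need to roll back.
            f.rename(f.parent / (f.name + ".bak"))
            renamed.append(name)
    return renamed

# Usage (hypothetical path): set_aside_authz_files("/var/lib/nifi/conf")
```

Restarting NiFi afterward will rebuild both files from the "Initial Admin Identities" and "Node Identities" configured in authorizers.xml.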
01-26-2017
05:36 PM
1 Kudo
@srini
The default "FlowFile Policy" is "use clone" within the Advanced UI. For an "and" operation, you will want to change that to "use original". Just keep in mind (this does not apply here since both rules are unique) that if more than one rule tries to perform the exact same "Actions", only the last rule in the list will be applied to the original FlowFile on output. Both "Rules" are run against every FlowFile that passes through this UpdateAttribute processor, so it will work in cases where only one or the other attribute is set and in cases where both are not set. Rules are evaluated in the order they are listed. You can drag rules up and down in the list to order them how you like. If this answer addresses your question, please click "accept". Thank you, Matt
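Sketched out, the Advanced UI configuration for this use case might look like the following (the rule names are arbitrary examples):

```
FlowFile Policy: use original

Rule "set-nograde"
  Condition: ${grade:isEmpty()}
  Action:    grade = nograde

Rule "set-nobranch"
  Condition: ${branch:isEmpty()}
  Action:    branch = nobranch
```

With "use original", both rules' actions are applied to the same original FlowFile, which is what makes the "and" behavior work.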
01-26-2017
04:05 PM
1 Kudo
@Eric Lloyd Did you follow the HDF install guide which includes installing the HDF mpack?
http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.1/bk_dataflow-ambari-installation/content/index.html NiFi will not show up as a service under the default HDP mpack. Thanks, Matt
01-26-2017
02:27 PM
@Karthik Narayanan The only problem with the "replaceNull" function is that it will only return "nograde" or "nobranch" if the incoming FlowFile does not have an attribute with the name "grade" or "branch". If the attributes exist but have no value set, the result will still be an empty value instead of being assigned "nograde" or "nobranch".
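For reference, the replaceNull approach being discussed would look like this in NiFi Expression Language; as noted above, it only helps when the attribute is truly absent, not when it exists with an empty value:

```
${grade:replaceNull('nograde')}
${branch:replaceNull('nobranch')}
```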
01-26-2017
02:16 PM
5 Kudos
@srini
NULL and empty are two different things and need to be handled in two different ways. NULL indicates there is no value set, while EMPTY means a value consisting only of new line, carriage return, tab, or space characters has been set. There are a number of NiFi Expression Language functions that work with non-existent attributes and attributes with either null or empty values. The first question to ask is whether every FlowFile has both the branch and grade attributes. If some are missing the attributes, different handling needs to be done. If you don't know for sure that every FlowFile will have these attributes created, I suggest using the "isEmpty" NiFi Expression Language function to check. The "isEmpty" function will return true if the attribute does not exist, exists with no value, or exists containing only new line, carriage return, tab, or space characters.
The UpdateAttribute processor has an "Advanced" UI that allows you to create if/then type rules. With the above rules applied, all FlowFiles leaving this UpdateAttribute processor will have both the "branch" and "grade" attributes. Those attributes will either have some previously assigned non-empty value or the "nobranch" and/or "nograde" values assigned to them. This works even if the incoming FlowFile was missing these attributes. Thanks, Matt
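To make the isEmpty semantics concrete, here is a small Python sketch of the behavior described above. It mimics the documented semantics for illustration only; it is not NiFi code:

```python
def is_empty(value):
    """Mimic NiFi EL isEmpty(): true when the attribute is missing (None),
    has no value, or contains only space/tab/newline/carriage-return chars."""
    return value is None or value.strip(" \t\r\n") == ""

def with_default(value, default):
    """Apply the UpdateAttribute rule: keep a real value, else the default."""
    return default if is_empty(value) else value

# with_default(flowfile.get("grade"), "nograde") would model the "grade" rule.
```

Running every FlowFile's "grade" and "branch" attributes through `with_default` models why the output always carries a non-empty value for both.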
01-25-2017
04:54 PM
@Michael Rivera
NiFi is designed to accept many trigger words: sec, second, secs, seconds, min, minutes, mins, hr, hrs, day, days, etc. The largest unit it will accept as of NiFi 1.x is week or weeks (or wk or wks). If you enter an invalid trigger word, the processor will let you know it is invalid. For example, trying to use month or year will produce the below: Keep in mind that by using the "Timer driven" scheduling strategy you are not setting a specific execution time. You are setting an execution interval, where the first interval is scheduled upon start of the processor. The second execution will occur the configured "Run Schedule" amount of time later. If you stop and then start the processor again, the interval starts over. The "CRON driven" scheduling strategy allows you to configure exact time(s) for execution. Thanks, Matt
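For reference, NiFi's CRON driven strategy uses Quartz-style cron expressions, which include a seconds field at the front. A sketch of a schedule that fires at 2:00:00 AM every day:

```
# seconds minutes hours day-of-month month day-of-week
0 0 2 * * ?
```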