Member since
07-30-2019
3466
Posts
1641
Kudos Received
1015
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 401 | 03-23-2026 05:44 AM |
| | 308 | 02-18-2026 09:59 AM |
| | 557 | 01-27-2026 12:46 PM |
| | 977 | 01-20-2026 05:42 AM |
| | 1287 | 01-13-2026 11:14 AM |
02-05-2019
07:07 PM
1 Kudo
@Venkatesh AV Just want to make sure we are using the correct processor for what you want to do: the ReplaceText processor is used to replace text in the content of a FlowFile. The UpdateAttribute processor could be used to replace text contained within an attribute of a FlowFile.

Assuming the FlowFile content is where you want to replace double quotes with single quotes, the following ReplaceText processor configuration will do that for you.

Thank you, Matt

If you found this answer addressed your question, please take a moment to log in and click the "ACCEPT" link.
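A minimal sketch of the ReplaceText properties involved (property names come from the standard ReplaceText processor; the exact values here are illustrative, not taken from the original screenshot):

```
Search Value:          "
Replacement Value:     '
Replacement Strategy:  Regex Replace
Evaluation Mode:       Entire text
```

With "Regex Replace" every double quote matched in the content is rewritten as a single quote; "Line-by-Line" evaluation mode would work as well for large content since it avoids loading the entire content at once.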
01-23-2019
03:06 PM
1 Kudo
@john y There is a core set of attributes that will exist on all FlowFiles:
1. entryDate
2. lineageStartDate
3. fileSize
4. uuid
5. filename
6. path

The first four cannot be changed by users. filename and path can have their values edited by users via something like the UpdateAttribute processor.

You can insert a LogAttribute processor anywhere in your flow to output the key/value attribute map for FlowFiles that pass through it to the nifi-app.log. Just keep in mind that leaving this processor in your flow can result in a lot of log output.

Thanks, Matt
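These core attributes can be referenced anywhere NiFi Expression Language is supported. As a hedged illustration, a dynamic property in an UpdateAttribute processor (the property name "full.path.example" here is made up) could combine two of them:

```
full.path.example = ${path}/${filename}
```

Every FlowFile passing through would then carry a new attribute whose value is the path and filename joined with a slash.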
01-16-2019
04:25 PM
1 Kudo
@Michael Vikulin Your nifi.properties file is configured to look for an authorizer with the identifier "managed-authorizer":

nifi.security.user.authorizer=managed-authorizer

The shared authorizers.xml does not contain a "managed-authorizer". If you want to use the "file-provider", you need to update your nifi.properties file.

I also see that you are using the ldap-provider for logging in to your NiFi. It is configured with:

<property name="Identity Strategy">USE_USERNAME</property>

This means that whatever string the user enters in the username login box will be parsed by any identity mapping patterns configured in the nifi.properties file, and the resulting value string is then passed to the authorizer.

So even once you fix your authorizers.xml or nifi.properties file, you are likely going to send "admin" to your authorizer rather than the admin user's full DN.

Thanks, Matt

If you found this answer addressed your question, please take a moment to log in and click the "ACCEPT" link.
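For reference, a rough sketch of what a "managed-authorizer" entry looks like in a stock NiFi 1.x authorizers.xml (class and identifier here are the defaults shipped with NiFi; your referenced access policy provider identifier may differ):

```
<authorizer>
    <identifier>managed-authorizer</identifier>
    <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
    <property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
```

The identifier element is what must match the value of nifi.security.user.authorizer in nifi.properties.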
01-14-2019
02:10 PM
1 Kudo
@Mr Anticipation *** Community Forum Tip: Try to avoid starting a new answer in response to an existing answer. Instead, use comments to respond to existing answers. There is no guaranteed order to different answers, which can make a discussion hard to follow. ***

1. NiFi and NiFi Registry are two totally different pieces of software. Each of these services likely runs as a different service user. HDF service user defaults: the NiFi service defaults to the "nifi" service user, and the NiFi Registry service defaults to the "nifiregistry" service user.

2. The NiFi service is where you build your dataflows on the canvas. The NiFi Registry service is used to store version-controlled dataflows from your NiFi.

3. Make sure that the directory you are trying to ingest files from is accessible by the nifi service user. I suggest accessing the server via the command line, becoming the nifi service user (# sudo su - nifi), and then navigating to the target directory (cd /home/xxx/receive). Keep in mind that even though the "receive" directory may be set to 777, if the nifi service user can't access /home or /home/xxx, it will not be able to see /home/xxx/receive regardless of what permissions are set on that directory.

Thank you, Matt
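The traversal point in item 3 can be checked with a quick shell sketch. Run it after becoming the nifi service user so the checks reflect that user's access; "/tmp" below is only a stand-in for the real ingest path such as /home/xxx/receive:

```shell
# Walk each directory level of a path and report whether the current
# user has execute (traverse) permission on it. A "blocked at" line
# anywhere on the path hides everything beneath it.
path="/tmp"   # substitute your ingest path, e.g. /home/xxx/receive
dir=""
for part in $(echo "$path" | tr '/' ' '); do
  dir="$dir/$part"
  if [ -x "$dir" ]; then
    echo "traverse OK: $dir"
  else
    echo "blocked at: $dir"
  fi
done
```

If any level reports "blocked at", the nifi user cannot reach the ingest directory even if the final directory itself is mode 777.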
01-11-2019
03:20 PM
@Mr Anticipation The ERROR says you have a permissions issue. The user who owns the NiFi Java process does not have permission to navigate down the path /home/xxx/receive and/or does not have permission to read the files you want to ingest.

Ambari by default creates the "nifi" service user account, which is used to run NiFi. As such, that "nifi" user must have access to traverse that directory path and consume the target file(s).

The following command can be used to see which user owns the two NiFi processes:

# ps -ef | grep -i nifi

Thank you, Matt

If you found this answer addressed your question, please take a moment to log in and click the "ACCEPT" link.
01-03-2019
10:39 PM
1 Kudo
@Adam J The Remote Process Group (RPG) was not designed with any logic to make sure specific FlowFiles go to one node versus another. It was designed simply to build a delivery model based on load across the target NiFi cluster nodes, and that delivery model can change each time the latest cluster status is retrieved.

If you need to be very specific about which node gets a specific FlowFile, your best bet is a direct delivery dataflow design. The best option here is to have your SplitText processor send to a RouteOnContent processor that sends the split with URL 1/2 to one connection and the FlowFile with URL 3/4 to another connection. Each of these connections would feed a different PostHTTP processor (this processor can be configured to send as a FlowFile). One of them would be configured to send to a ListenHTTP processor on node 1, and the other configured to point at the same ListenHTTP processor on node 2.

You may want to think about this setup from an HA standpoint. If you lose either node 1 or node 2, those FlowFiles will just stack up and not transfer until the node is back online. At the same time, the other URLs continue to transfer.

Something else you may want to look into is the new load-balanced connections capability introduced in NiFi 1.8: https://blogs.apache.org/nifi/entry/load-balancing-across-the-cluster

There is a "Partition by Attribute" option with this new feature which makes sure FlowFiles with matching attributes go to the same node. While you still can't specify a particular node, it does allow similar FlowFiles to be moved to the same node. If a node goes down you don't end up with an outage; files with matching attributes stay together and move to a different node that is still available.

Thanks, Matt

If you found this answer addressed your question, please take a moment to log in and click the "ACCEPT" link.
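The direct-delivery design above could be sketched with processor properties like these (the dynamic route name "urls12", the regex values, hostnames, and port are made-up placeholders; "Send as FlowFile", "Listening Port", and the "contentListener" base path are from the stock PostHTTP/ListenHTTP processors):

```
RouteOnContent
  Match Requirement:  content must contain match
  urls12 (dynamic):   url1|url2          # splits matching this go to node 1

PostHTTP (one per destination node)
  URL:                http://node1.example.com:8181/contentListener
  Send as FlowFile:   true

ListenHTTP (on each destination node)
  Listening Port:     8181
  Base Path:          contentListener
```

"Send as FlowFile" preserves the FlowFile attributes across the HTTP hop, so the receiving node sees the same attribute map as the sender.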
01-02-2019
10:42 PM
@Nimrod Avni The config.json generated as output when you stood up your NiFi CA (server) is there to simplify execution of the client mode so that you do not have to manually pass all the server info to the client mode input. This was just a choice made by the development team: generate this file rather than expect users to remember what they entered when they stood up the server. You can delete this file if you want, as long as you have stored or can remember the pertinent information yourself for running the tls-toolkit client mode later.

As far as client mode goes, the generated config.json is also just there to provide the pertinent information about the client keystore that was created. This is all information you should already know (unless you did not provide a password and the toolkit auto-generated one for you, in which case you would need to get it from the output config.json file).

Thanks, Matt
01-02-2019
09:52 PM
1 Kudo
You can run the tls-toolkit in client mode directly from any node, but you will either need to provide the CA server info or copy the CA config.json to each node manually. I was not trying to imply that you must execute client mode from the same server where the CA server was installed.

The NiFi CA was not built with the intent of use in a production environment. It was built as a tool that allows users to easily and quickly set up secured NiFi instances/clusters for development and testing purposes. For production environments a corporately/privately managed CA should be used.

There should only ever be one NiFi CA installed and used to sign all certificates. I apologize if what I wrote was confusing and led you to believe multiple NiFi CAs were needed or should be used.

Feel free to open an Apache NiFi Jira to add the ability to update an existing, or output a new, nifi.properties file when client mode is used. I don't see that as a bad request at all.

Thank you, Matt
01-02-2019
08:54 PM
1 Kudo
@Nimrod Avni The standalone option is not ideal for setting up a NiFi cluster. Since the certificates generated are not signed by a Certificate Authority, the truststore will need to contain a trustedCertEntry for each certificate created. Adding additional nodes to a cluster would require going back and modifying the truststore on every node in the cluster.

The client/server mode allows you to stand up a Certificate Authority (server mode) that is used to sign all the client certificates created (one for each NiFi node). When you stand up the server, a config.json is generated which can be used as input to the client mode operation. Because of this, it is common for each of the client certificates to also be generated from the same server where the CA (server) was created/started. The client mode outputs a config.json file for each client certificate which simply provides the information needed to set the relevant nifi.properties properties on each of your NiFi nodes.

It is safe to say that the structure will remain unchanged within a major NiFi release version. An external script could be used to update a node's nifi.properties file from the output generated in the client mode config.json file. HDF, for example, already does this: if you choose to utilize the NiFi CA in HDF, it will take care of obtaining the client certificates and updating the nifi.properties on each node. This allows new client certificates to be generated on demand for each node. There is no option to configure NiFi to read these security parameters from the client mode generated config.json file.

Thank you, Matt

If you found this answer addressed your question, please take a moment to log in and click the "ACCEPT" link.
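A hedged sketch of invoking the two modes (the hostname and token values are placeholders; the -c and -t flags are from the tls-toolkit usage, but verify against your toolkit version's help output):

```shell
# Server mode: stand up the NiFi CA; writes a config.json describing the CA
./bin/tls-toolkit.sh server -c ca.example.com -t mySharedToken

# Client mode: request a certificate signed by that CA for this node;
# writes a keystore, truststore, and a per-client config.json
./bin/tls-toolkit.sh client -c ca.example.com -t mySharedToken
```

The shared token is what lets the CA authenticate client requests, so it must match between the server and every client invocation.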
01-02-2019
05:14 PM
@john y The rest-api endpoint you are using is incorrect for instantiating an existing template on the canvas. You should instead be using a curl command that looks something like this:

# curl 'http://localhost:8080/nifi-api/process-groups/<PROCESS GROUP UUID>/template-instance' -H 'Content-Type: application/json' --data-binary '{"templateId":"<THE_TEMPLATE_UUID>","originX":100,"originY":100,"disconnectedNodeAcknowledged":false}' --compressed

The rest-api endpoint contains the UUID of the process group in which you will be instantiating your template. You need to include a header like the one above that defines the content type, and then provide "--data-binary" JSON that includes the template's UUID and the x/y coordinates on the graph where the template should be placed.

Thank you, Matt

If you found this answer addressed your question, please take a moment to log in and click the "ACCEPT" link.
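If you do not already have the template's UUID, it can be looked up first; a sketch against the same host/port as in the post (this GET endpoint returns JSON listing every template, including each one's "id" field):

```shell
# List all templates known to this NiFi instance
curl 'http://localhost:8080/nifi-api/flow/templates'
```

The "id" value from the matching template entry is what goes into the "templateId" field of the instantiation request.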