Member since
09-29-2015
871
Posts
721
Kudos Received
255
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2616 | 12-03-2018 02:26 PM
 | 1723 | 10-16-2018 01:37 PM
 | 3106 | 10-03-2018 06:34 PM
 | 1853 | 09-05-2018 07:44 PM
 | 1462 | 09-05-2018 07:31 PM
04-10-2017
03:50 PM
1 Kudo
Ok, nifi.remote.input.host should not be the value of a remote server. It should either be blank (just like the web host), which means it will bind to all interfaces, or it should be a specific hostname/IP of the current node whose nifi.properties you are editing. Here is how it works. Let's say you have a standalone NiFi instance with an RPG trying to send data to a 3-node cluster. In the RPG you put the URL of the target cluster's UI, like http://somehost:8080/nifi. The RPG then makes a REST call to http://somehost:8080/nifi to ask for information about all the nodes in the cluster. The response contains the value of nifi.remote.input.host and nifi.remote.input.socket.port for each of the three nodes in the destination cluster, so the source instance now knows where to send data to.
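As a sketch, the Site-to-Site section of nifi.properties on one node of the destination cluster might look like this (the hostname and port below are made-up examples; leave nifi.remote.input.host blank to bind to all interfaces, or set it to that node's own hostname):

```properties
# nifi.properties on node1 of the destination cluster (hypothetical hostname/port)
# This node's OWN hostname, or blank to bind to all interfaces:
nifi.remote.input.host=node1.example.com
# RAW S2S port advertised back to sources via the REST call described above:
nifi.remote.input.socket.port=10443
nifi.remote.input.secure=false
```

Each node advertises its own nifi.remote.input.host value, which is why pointing it at a remote server breaks the peer list the RPG receives.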
04-10-2017
03:13 PM
3 Kudos
The templates directory is left over from old versions of NiFi (0.x). In the 1.x line, all templates are stored within flow.xml.gz so that they inherit the same security model as other components when running in a secure instance. When you upload a template in 1.x, you do so through the context palette on the left, which uploads it into the process group you are currently in (or the root group if you are on the root canvas). Likewise, if you create a new template, it is created in the process group you are in.
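As a quick sanity check that templates really live inside flow.xml.gz, you can grep the gzipped flow. This is just a sketch: to keep it self-contained it fabricates a tiny flow file first; on a real install you would point FLOW at $NIFI_HOME/conf/flow.xml.gz instead.

```shell
# Simulate a minimal 1.x flow.xml.gz containing one embedded template
# (on a real install, set FLOW to "$NIFI_HOME/conf/flow.xml.gz" instead).
FLOW="$(mktemp)"
printf '<flowController><template><name>demo</name></template></flowController>' | gzip > "$FLOW"

# Count embedded <template> elements -- in 1.x this is where templates are stored
gunzip -c "$FLOW" | grep -c '<template'
```

On a real secured instance the count should match the templates visible in the UI for groups you can read.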
04-10-2017
03:05 PM
1 Kudo
@Michael Silas Once you have a running cluster, you shouldn't have to modify authorizers.xml, authorizations.xml, and users.xml by hand to add a new node. There are two different ways you could do it.

Approach #1
1) Generate a cert for your new node.
2) Go to your existing cluster and, using the UI, add a new user with the DN from the new node's cert.
3) Grant the new user the "proxy requests" policy.
4) On the new node, leave the initial admin and all node identities blank, then start the node; since it has 0 users, 0 policies, and no flow, it will inherit everything from the cluster.

Approach #2
1) Generate a cert for your new node.
2) On the new node, make authorizers.xml exactly the same as on the existing cluster. Meaning, if you have a 3-node cluster and are adding the fourth node, put only the 3 existing nodes as node identities and keep the same initial admin. This way it generates exactly the same users and policies as the running cluster, which is required for it to join.
3) At this point you should be able to start the new node and have it join the cluster.
4) Go into the UI, add the user for the new node, and add that user to the "proxy requests" policy.

This blog post describes approach #2: https://pierrevillard.com/2016/11/30/scaling-updown-a-nifi-cluster/

Overall, in order to join the cluster a new node needs one of the following:
- The exact same users, groups, policies, and flow as the cluster
- No users, no groups, no policies, and no flow, in which case it will inherit everything from the cluster
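For approach #2, the relevant part of authorizers.xml on the new (fourth) node might look like the sketch below. The DNs are placeholders, and note that the new node itself is NOT listed: only the 3 existing nodes and the same initial admin as the cluster, so the generated users and policies match exactly.

```xml
<!-- Sketch of authorizers.xml on the NEW node; all DNs below are placeholders. -->
<authorizers>
    <authorizer>
        <identifier>file-provider</identifier>
        <class>org.apache.nifi.authorization.FileAuthorizer</class>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Users File">./conf/users.xml</property>
        <!-- Same initial admin as on the existing cluster -->
        <property name="Initial Admin Identity">CN=admin, OU=NIFI</property>
        <!-- Only the 3 EXISTING nodes, not the node being added -->
        <property name="Node Identity 1">CN=node1.example.com, OU=NIFI</property>
        <property name="Node Identity 2">CN=node2.example.com, OU=NIFI</property>
        <property name="Node Identity 3">CN=node3.example.com, OU=NIFI</property>
    </authorizer>
</authorizers>
```

Once the node has joined, step 4 (adding the new node's user and proxy policy through the UI) updates the cluster-wide authorizations for you.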
04-07-2017
08:19 PM
Sorry, I may have just misunderstood the information. In all of the logs it showed foo.xyz.abc.com, and then for nifi.remote.input.host you said it was 192.168.x.x. I realize both are obfuscated, but I was assuming these were different values; otherwise I thought you would have put foo.xyz.abc.com for both of them. If nifi.web.http.host and nifi.remote.input.host are the same in your nifi.properties, then you can ignore my previous comment.
04-06-2017
08:18 PM
Nothing is really jumping out at me. It feels like something isn't configured correctly, but I'm not sure what. Have you tried making nifi.remote.input.host the same value as the hostname you use for the UI? I'm just wondering, since that was a 192 address and the web host looks like a hostname. The web host and the S2S host can be different, but I wonder if there is some issue using the 192 address in this case.
04-06-2017
04:34 PM
Thanks for uploading the screenshots. I can see in the code that segment.original.filename specifically removes the extension, and this appears to have been the case since the initial NiFi code was open-sourced, so I'm not sure whether this is considered a bug or really a preference. The path attribute is updated to reflect the path within the archive. There could arguably be a bug here, but I believe it makes sense, since the path of the children is not necessarily the path of the original flow file. In the short term, I think the easiest thing to do is stick an UpdateAttribute processor right before UnpackContent and add two properties that copy the filename and path to new attributes, like this:

archive.filename = ${filename}
archive.path = ${path}

The flow files for the unpacked files should retain these attributes.
04-04-2017
09:09 PM
Ok, are you doing RAW site-to-site or HTTP-based (this is an option in the RPG)? And what do these properties look like in your nifi.properties?

# Site to Site properties
nifi.remote.input.host=
nifi.remote.input.secure=false
nifi.remote.input.socket.port=
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
04-04-2017
04:32 PM
3 Kudos
I assume that when you access the NiFi UI in your browser you go to http://foo.xyz.cequintecid.com:8080/nifi and that works fine, right? Can you perform the GenerateFlowFile -> RPG-to-self test and get the full stack trace from nifi-app.log when the error happens? Thanks.
04-04-2017
03:05 PM
Ok, what are the exact attribute names that you see on the flow files in the queue going into UnpackContent that are being lost?
04-04-2017
02:39 PM
Can you elaborate on where you see the source path and where it is getting dropped? Going into FetchHDFS there should be a flow file whose content is a path to fetch, like /data/foo.zip. After FetchHDFS, the content of foo.zip has been written to the flow file content, and the filename attribute of the flow file should be foo.zip. It then goes to UnpackContent, which produces multiple unpacked child flow files, each of which should have segment.original.filename set to foo.zip. Are you asking to retain the original HDFS path that went into FetchHDFS?