Member since: 09-29-2015
Posts: 871
Kudos Received: 723
Solutions: 255
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4275 | 12-03-2018 02:26 PM |
| | 3224 | 10-16-2018 01:37 PM |
| | 4336 | 10-03-2018 06:34 PM |
| | 3194 | 09-05-2018 07:44 PM |
| | 2442 | 09-05-2018 07:31 PM |
06-21-2017
06:49 PM
1. InvokeHttp to make your first request
2. SplitJson on $. to get a flow file for each host
3. EvaluateJsonPath with host = $.host to extract the host field from the JSON into a flow file attribute
4. UpdateAttribute with request = {"specification":["template",{"service_description":"Filesystem /","site":"prod","graph_index":0,"host_name":"${host}"}],"data_range":{"time_range":[1491174000,1491260340]}}
5. InvokeHttp
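Outside NiFi, the per-host splitting and request templating that SplitJson, EvaluateJsonPath, and UpdateAttribute perform can be sketched in plain Python. The shape of the first response (a JSON array of objects with a host field) is an assumption based on the flow above:

```python
import json

# Hypothetical response body from the first InvokeHttp:
# a JSON array with one object per host.
response_body = '[{"host": "web01"}, {"host": "web02"}]'

# SplitJson on $. -> one element (one flow file) per host
hosts = json.loads(response_body)

requests = []
for entry in hosts:
    host = entry["host"]  # EvaluateJsonPath: host = $.host
    # UpdateAttribute: build the request body, substituting ${host}
    request = {
        "specification": ["template", {
            "service_description": "Filesystem /",
            "site": "prod",
            "graph_index": 0,
            "host_name": host,
        }],
        "data_range": {"time_range": [1491174000, 1491260340]},
    }
    requests.append(json.dumps(request))

# Each entry in `requests` would be the body of the second InvokeHttp
print(requests[0])
```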
06-21-2017
04:48 PM
Java's service loader is non-deterministic: one node might find the processors in the standard NAR, another node might find them in your NAR. Even a node that currently works might change behavior between restarts of NiFi.
06-21-2017
04:34 PM
1 Kudo
NAR #1 shows an old standard-processors JAR being included: nifi-standard-processors-0.0.1-incubating.jar. That could be a problem: you don't want the processors from another NAR included in your NAR. It means the standard processors are present twice, once from the standard NAR and once from your custom NAR #1. When NiFi starts it scans the classpath and may find them first in either NAR. If it finds them first in your NAR, it loads them from there, which results in API calls against classes that are no longer compatible with 1.1.0.
06-21-2017
03:42 PM
Can you provide a listing of the JARs included in your custom NAR?

ls -l work/nar/extensions/<YOUR-NAR>-unpacked/META-INF/bundled-dependencies/

And do this for each custom NAR.
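If there are several custom NARs, a small script can dump the bundled JARs of every unpacked NAR in one pass. This is just a sketch of the ls above; work/nar/extensions is assumed to be NiFi's default unpack location:

```python
import glob
import os

def list_bundled_jars(work_dir="work/nar/extensions"):
    """Return {nar_name: [jar, ...]} for every unpacked NAR under work_dir."""
    jars = {}
    for nar in sorted(glob.glob(os.path.join(work_dir, "*-unpacked"))):
        deps = os.path.join(nar, "META-INF", "bundled-dependencies")
        if os.path.isdir(deps):
            jars[os.path.basename(nar)] = sorted(os.listdir(deps))
    return jars

if __name__ == "__main__":
    for nar, deps in list_bundled_jars().items():
        print(nar)
        for jar in deps:
            print("  ", jar)
```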
06-21-2017
01:47 PM
2 Kudos
There are two solutions that would work well here:

1) Have the import process distribute the files evenly to all the NiFi nodes; then each node doesn't have to coordinate with anyone and just processes the files on its own local file system. I think this is what you meant in #3.

2) Mount a shared network drive on all the nodes, upload the files to the shared drive, then use ListFile running on the primary node only to list the shared directory, followed by a Remote Process Group to distribute the listings across the NiFi nodes, and a FetchFile on each node to retrieve the listed files. More details on the List+Fetch pattern are here: https://community.hortonworks.com/articles/16120/how-do-i-distribute-data-across-a-nifi-cluster.html
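Option 1 amounts to round-robin (or similar) placement by whatever process imports the files. A minimal sketch, with the file and node names being made-up examples:

```python
from itertools import cycle

def assign_files(files, nodes):
    """Round-robin files across NiFi nodes so each node only sees its own share."""
    assignment = {node: [] for node in nodes}
    for f, node in zip(files, cycle(nodes)):
        assignment[node].append(f)
    return assignment

# e.g. five incoming files across a three-node cluster
print(assign_files(["f1", "f2", "f3", "f4", "f5"], ["node1", "node2", "node3"]))
```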
06-19-2017
01:58 PM
1 Kudo
This type of information is typically stored in provenance data. You can use the SiteToSiteProvenanceReportingTask to get access to provenance events in JSON format and then filter for the ones you are interested in. Each provenance event has an event time, which is when the event was reported, as well as a lineage start time, which is the time of the first event in the given lineage. So event time minus lineage start time is how long it took to reach the current event.
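With the JSON events in hand, computing that duration is a small filter. The field names used here (eventType, eventTimeMillis, lineageStartMillis) are assumptions for illustration; check the actual schema the reporting task produces:

```python
def time_to_event(events, event_type):
    """For each event of the given type, return ms elapsed since lineage start.

    Field names are assumed for this sketch; adjust to match the JSON
    your SiteToSiteProvenanceReportingTask actually emits.
    """
    durations = []
    for e in events:
        if e.get("eventType") == event_type:
            durations.append(e["eventTimeMillis"] - e["lineageStartMillis"])
    return durations

events = [
    {"eventType": "RECEIVE", "eventTimeMillis": 1000, "lineageStartMillis": 1000},
    {"eventType": "SEND", "eventTimeMillis": 4500, "lineageStartMillis": 1000},
]
print(time_to_event(events, "SEND"))  # [3500]
```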
06-14-2017
09:02 PM
3 Kudos
Why not use ExecuteStreamCommand to execute a local shell script that then SSHes to the remote machine and executes the Python script? For what it's worth, ExecuteScript does allow incoming flow files: the script is passed a session object and must use that session to get a flow file, operate on it, and transfer it. ExecuteProcess doesn't allow incoming flow files, so maybe you meant that one? With ExecuteProcess I think you could make the command "ssh" and the arguments be the path to the script on the remote machine.
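The get/modify/transfer shape an ExecuteScript body must follow looks roughly like this. The FakeSession class here is a toy stand-in for NiFi's real ProcessSession, just so the pattern is runnable outside NiFi:

```python
REL_SUCCESS = "success"

class FakeSession:
    """Toy stand-in for NiFi's ProcessSession (illustration only)."""
    def __init__(self, queue):
        self.queue = queue          # pending flow files (dicts of attributes)
        self.transferred = []       # (flow_file, relationship) pairs

    def get(self):
        return self.queue.pop(0) if self.queue else None

    def putAttribute(self, flow_file, key, value):
        flow_file[key] = value
        return flow_file

    def transfer(self, flow_file, relationship):
        self.transferred.append((flow_file, relationship))

# What an ExecuteScript body does: get a flow file, operate on it, transfer it
session = FakeSession([{"filename": "data.csv"}])
flow_file = session.get()
if flow_file is not None:
    flow_file = session.putAttribute(flow_file, "processed", "true")
    session.transfer(flow_file, REL_SUCCESS)

print(session.transferred)
```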
06-14-2017
06:34 PM
Also, those WARN messages above are actually OK and are not the cause of the problem. There should be some ERROR logs further down, towards the bottom, on the node that was started with the new version; they will most likely say something about "Uninheritable flow".
06-14-2017
06:31 PM
1 Kudo
Currently a rolling upgrade is not supported; you'll have to stop all nodes and start them back up with 1.2.0.3.0.0 at the same time. If that still doesn't work, please report back.