Member since: 09-29-2015
Posts: 871
Kudos Received: 723
Solutions: 255
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3348 | 12-03-2018 02:26 PM
 | 2302 | 10-16-2018 01:37 PM
 | 3615 | 10-03-2018 06:34 PM
 | 2392 | 09-05-2018 07:44 PM
 | 1814 | 09-05-2018 07:31 PM
08-31-2016
12:44 PM
2 Kudos
The NiFi UI has a Summary page available from the top-right menu, as well as per-processor stats available by right-clicking a processor and selecting Status History. Both of those views show things like bytes read/written, FlowFiles in/out, etc. If you want to track this information in an external system, you can implement a custom ReportingTask to push it there. This is how Ambari is able to display its stats/graphs; there is an AmbariReportingTask: https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-ambari-bundle/nifi-ambari-reporting-task/src/main/java/org/apache/nifi/reporting/ambari/AmbariReportingTask.java
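In case it helps, here is a minimal sketch of what a custom ReportingTask could look like. The class name, package, and the simple logging are placeholders; a real task would push these numbers to an external monitoring system, the way the AmbariReportingTask sends them to Ambari. It assumes the standard reporting API (AbstractReportingTask and ReportingContext.getEventAccess().getControllerStatus()):

```java
package com.example.reporting; // hypothetical package

import org.apache.nifi.controller.status.ProcessGroupStatus;
import org.apache.nifi.reporting.AbstractReportingTask;
import org.apache.nifi.reporting.ReportingContext;

// Rough sketch of a custom ReportingTask that logs flow-level stats on each run.
public class LogStatsReportingTask extends AbstractReportingTask {

    @Override
    public void onTrigger(final ReportingContext context) {
        // Snapshot of the root process group's current stats
        final ProcessGroupStatus status = context.getEventAccess().getControllerStatus();

        // Replace this logging with a call to your external monitoring system
        getLogger().info("FlowFiles received={}, sent={}, bytes read={}, bytes written={}",
                new Object[] {
                        status.getFlowFilesReceived(),
                        status.getFlowFilesSent(),
                        status.getBytesRead(),
                        status.getBytesWritten()
                });
    }
}
```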
12-09-2016
07:19 AM
@Matt Thank you, it works now!
08-30-2016
09:52 PM
@mclark @Bryan Bende I tried the same thing: I had the List --> Fetch --> Output port inside a process group on the remote machine, and my local NiFi was not able to find that port. But when I removed it from the process group and copied it onto the main canvas, my local NiFi was able to find it. Thank you both.
10-12-2016
06:04 PM
The MergeContent processor simply bins and merges the FlowFiles it sees on an incoming connection at run time. In your case you want each bin to have a minimum of 100 FlowFiles before merging, so you will need to specify that in the "Minimum Number of Entries" property. I never recommend setting any minimum value without also setting the "Max Bin Age" property. Say you only ever get 99 FlowFiles, or the amount of time it takes to get to 100 exceeds the useful age of the data being held: those FlowFiles will sit in a bin indefinitely, or for an excessive amount of time, unless that exit age has been set. Also keep in mind that if you have more than one connection feeding your MergeContent processor, on each run it looks at the FlowFiles on only one connection, moving in round-robin fashion from connection to connection. NiFi provides a "funnel" which allows you to merge FlowFiles from many connections into a single connection. Matt
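As a rough illustration only (the Max Bin Age value is a placeholder and depends on your data volume and how long the data stays useful), the two properties discussed above could be set along these lines:

```
# Illustrative MergeContent settings for this scenario
Minimum Number of Entries = 100      # wait for at least 100 FlowFiles per bin
Max Bin Age               = 5 min    # but never hold a bin longer than this (placeholder value)
```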
08-23-2016
01:14 PM
Ah right, I'm used to setting this up with ListHDFS + FetchHDFS, which is different because it's a shared resource... You are right that it is not going to work correctly when it is not a shared location, because another node can't fetch a file that exists only on the primary node. Sorry about the confusion.
08-23-2016
04:41 PM
Glad to hear it!
09-29-2017
03:20 PM
Hi Hanu V, can you please share an example of the attributes or a sample FlowFile to explain how ExtractText can assign the entire row as an attribute? I have been searching for this for a while.
08-18-2016
01:42 PM
I found the problem. The reason was HBase: I was sending the same values as the row key, so it could not work. After changing the key values, everything is working fine. Thanks
08-17-2016
01:43 AM
1 Kudo
@Randy Gelhausen NiFi JIRA to capture this idea: https://issues.apache.org/jira/browse/NIFI-2585
08-16-2016
01:44 PM
3 Kudos
I think it depends on what you mean by "schedule it to run every hour"... NiFi itself would always be running, and different processors can be scheduled to run according to their needs. Every processor supports timer-based or cron-based scheduling, so using either of those you can set a source processor to run every hour. You could also use the REST API to start and stop processors as needed; anything you can do in the UI can be done through the REST API (see the sketch at the end of this post).

For best practices for upgrading NiFi, see this wiki page: https://cwiki.apache.org/confluence/display/NIFI/Upgrading+NiFi

For deploying changes to production there are a couple of approaches; one of them is based around templates: https://github.com/aperepel/nifi-api-deploy
Some people also just move the flow.xml.gz from one environment to another, but this assumes you have parameterized everything that is different between environments.
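Going back to the REST API point above, here is a minimal sketch of starting a processor programmatically. It assumes the PUT /nifi-api/processors/{id}/run-status endpoint available in NiFi 1.x; the host, processor id, and revision version are placeholders, and the revision must match the processor's current revision:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Rough sketch: flip a processor's run status via the NiFi REST API.
public class StartProcessor {
    public static void main(String[] args) throws Exception {
        String processorId = "xxxx-xxxx"; // placeholder processor id
        // Use the processor's current revision version; "STOPPED" stops it instead.
        String body = "{\"revision\":{\"version\":0},\"state\":\"RUNNING\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/nifi-api/processors/" + processorId + "/run-status"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```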