Member since: 07-30-2019
Posts: 3135
Kudos Received: 1565
Solutions: 909
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 158 | 01-09-2025 11:14 AM |
| | 897 | 01-03-2025 05:59 AM |
| | 442 | 12-13-2024 10:58 AM |
| | 497 | 12-05-2024 06:38 AM |
| | 392 | 11-22-2024 05:50 AM |
01-03-2017
02:46 PM
@Aman Jain
If you found this information helpful, please accept the answer.
01-02-2017
01:04 PM
@amanjain The FlowFile would be routed to the failure relationship in both of those cases. Those FlowFiles would be penalized based on the penalty duration configured on the FetchSFTP processor (default of 30 seconds). A penalized FlowFile will not be processed by the processor it is connected to until that penalty has expired. The common approach here is to loop the failure relationship back on the FetchSFTP processor, so that after the penalty has expired another attempt is made to retrieve the data. Matt
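The retry loop described above can be sketched as processor settings and connections (the property value is illustrative, and exact failure relationship names vary by FetchSFTP version):

```
# FetchSFTP processor (illustrative)
Penalty Duration = 30 sec          # wait applied to penalized FlowFiles

# Connections:
#   success -> next processor in the flow
#   failure -> back into this same FetchSFTP processor (self-loop);
#              a penalized FlowFile sits in this queue until the
#              penalty expires, then the fetch is retried
```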
12-21-2016
07:00 PM
@Sunile Manjee Also keep in mind that NiFi content archiving is enabled by default, with a retention period of 12 hours or 50% disk utilization before archived content is removed/purged. Manually purging FlowFiles within your dataflow will not trigger the deletion of archived FlowFiles.
12-20-2016
04:47 PM
1 Kudo
@Ahmad Debbas FlowFiles generated from the GetHDFS processor should have a "path" attribute set on them:
The path is set to the relative path of the file's directory on HDFS. For example, if the Directory property is set to /tmp, then files picked up from /tmp will have the path attribute set to "./". If the Recurse Subdirectories property is set to true and a file is picked up from /tmp/abc/1/2/3, then the path attribute will be set to "abc/1/2/3". Since it is only the relative path and not an absolute path, you would need to use an UpdateAttribute processor to prepend the configured directory path to that relative path if you need the absolute path for use later in your flow. Thanks, Matt
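As a sketch, an UpdateAttribute processor placed after GetHDFS could rebuild the absolute path (assuming the Directory property was /tmp; the attribute name absolute.hdfs.path is made up for this example):

```
# UpdateAttribute — add a dynamic property (name and value illustrative)
absolute.hdfs.path = /tmp/${path}/${filename}
```

Note that for files picked up from the root of the configured directory, ${path} is "./", so you may want to normalize the resulting path downstream.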
12-20-2016
03:36 PM
@D'Andre McDonald The Get based processors will create an "absolute.path" FlowFile attribute on all files that are ingested into NiFi. So you would configure your Get processor to point at the base directory and consume files from all subdirectories. The Put based processors support Expression Language in the "Remote Path" property, so you can use any attribute on the FlowFile to specify what path the file will be written to on the put. Here you could use ${absolute.path} as the value for this property. The Put based processors also have a "Create Directory" property which you can set to true. Thank you, Matt
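For example, a Put processor's properties could be configured along these lines (property names here match PutSFTP; other Put processors may differ slightly):

```
# PutSFTP processor (illustrative)
Remote Path      = ${absolute.path}
Create Directory = true
```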
12-15-2016
01:22 PM
2 Kudos
@NAVEEN KUMAR
One suggestion might be to use a ListFile processor configured to run on a cron schedule. You could then feed the success relationship from that processor to a MonitorActivity processor, and route the inactive relationship of that processor to a PutEmail processor. So let's say you have your ListFile configured to run every 3 minutes based on a cron. You could set the threshold in the MonitorActivity processor to 3 minutes with "Continually Send Messages" set to true. With the inactive relationship routed to PutEmail, you will get an email every 3 minutes if ListFile produced no new files. You could also route the activity.restored relationship to a PutEmail processor if you want to be notified when files are seen again following a period of no activity. Thanks, Matt
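The flow described above can be sketched as follows (timings illustrative):

```
# ListFile (Scheduling: CRON driven, e.g. every 3 minutes)
#   success -> MonitorActivity

# MonitorActivity processor
Threshold Duration        = 3 min
Continually Send Messages = true
#   inactive          -> PutEmail  (alert: no new files listed)
#   activity.restored -> PutEmail  (optional: files seen again)
```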
12-14-2016
10:51 PM
1 Kudo
@Sunile Manjee FlowFile content is stored in claims inside the content repo. Each claim can contain the content from 1 or more FlowFiles. A claim will not be moved to the content archive or purged from the content repository until all active FlowFiles in your dataflow that reference any of the content in that claim have been removed. Those FlowFiles can be removed via manual purging of the queues (Empty Queue), FlowFile expiration on a connection, or via auto-termination at the end of a dataflow.
The FlowFile count and size reported in the UI do not reflect the size of the claims in the content repo. Those stats report the size and number of active FlowFiles queued in your flow. It is common, and expected, for the size reported in the UI to differ from actual disk usage.
Thanks, Matt
12-12-2016
01:30 PM
2 Kudos
@Piyush Routray Not sure I am clear on what you mean by "I intend to have a separate NiFi cluster than the HDF cluster". Are you installing just NiFi via the command line? - You can install NiFi using the command line and utilize the embedded ZooKeeper. http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.0.1/index.html When you get to the download HDF section of the "Command Line Installation" documentation, go to the bottom of the list to download just the NiFi tar.gz file. The relevant docs for this are found here: http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.0.1/bk_administration/content/clustering.html http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.0.1/bk_administration/content/state_providers.html Are you trying to install NiFi via HDF's Ambari? - The Ambari based installation of HDF will install an external ZooKeeper and set NiFi up to use it for you. Thanks, Matt
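As a sketch, clustering with the embedded ZooKeeper involves properties along these lines in conf/nifi.properties (hostnames and ports illustrative; see the linked admin guide for the authoritative list):

```
# conf/nifi.properties (illustrative values)
nifi.state.management.embedded.zookeeper.start=true
nifi.cluster.is.node=true
nifi.cluster.node.address=node1.example.com
nifi.cluster.node.protocol.port=11443
nifi.zookeeper.connect.string=node1.example.com:2181,node2.example.com:2181,node3.example.com:2181
```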
12-09-2016
02:10 AM
@pholien feng Before a user can access the UI, that user must have the "view the interface" policy granted to them. This policy is added through the global policies UI found under the hamburger menu located in the upper right corner. I see that step is missing in the above answer. Sorry about that. Matt
12-08-2016
07:05 PM
1 Kudo
@Michael Young HDF NiFi at its core is designed to be very lightweight; however, how powerful a host/node HDF NiFi needs really depends on the complexity of the implemented dataflow and the throughput and data volumes that dataflow will be handling. HDF NiFi may be deployed at the edge, but usually along with those edge deployments comes a centralized cluster deployment that runs a much more complex dataflow handling data coming from the edge NiFis as well as many other application sources. Thanks, Matt