Member since: 07-30-2019
Posts: 3436
Kudos Received: 1633
Solutions: 1012

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 160 | 01-27-2026 12:46 PM |
| | 583 | 01-13-2026 11:14 AM |
| | 1293 | 01-09-2026 06:58 AM |
| | 1045 | 12-17-2025 05:55 AM |
| | 509 | 12-17-2025 05:34 AM |
03-20-2019
02:02 PM
@Matt Clarke, @matt burgess Exactly the second point is happening: each node is generating its own value, incrementing from the last value stored in its local state. So which processor or method should I use to generate an incremental batchid (batch1, batch2, and so on), since UpdateAttribute is mixing up values when running on a cluster? Or is there any property by which the UpdateAttribute processors on all nodes can pick up each other's last state variable? Please suggest.
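As an aside (not from the original thread), here is a rough sketch of the failure mode being described: each node incrementing its own state yields duplicate batch ids, while a single shared counter, however it is provided, stays monotonic. The node names and counter store below are hypothetical.

```python
# Illustration only: why per-node local state produces duplicate batch ids,
# while a single shared counter does not. Nothing here uses real NiFi APIs.

local_state = {"node1": 0, "node2": 0, "node3": 0}   # each node's own state (hypothetical)
shared_counter = 0                                    # one cluster-wide counter (hypothetical)

def next_batch_local(node):
    # Mimics state that is scoped to a single node, as UpdateAttribute keeps it.
    local_state[node] += 1
    return f"batch{local_state[node]}"

def next_batch_shared():
    # Mimics a counter kept in one shared location that every node consults.
    global shared_counter
    shared_counter += 1
    return f"batch{shared_counter}"

# Three nodes each process one FlowFile:
print([next_batch_local(n) for n in ("node1", "node2", "node3")])  # ['batch1', 'batch1', 'batch1'] - collisions
print([next_batch_shared() for _ in range(3)])                      # ['batch1', 'batch2', 'batch3'] - monotonic
```

A common workaround, not spelled out in this excerpt, is to schedule the id-generating processor on the primary node only so that a single node owns the state.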
05-08-2019
02:37 PM
@Mahadevan Ganesan A couple of possibilities come to mind here:
1. Based on the screenshot provided, you have configured a Max Bin Age of 2 minutes. The timer on a bin starts when the first FlowFile has been allocated to that bin. It is possible that your bin is being merged at 2 minutes while it still has fewer than the desired 100 FlowFiles allocated to it. Try setting Max Bin Age to a higher value and see what the results are.
2. You have 10 allocated bins, but possibly 10 or more unique schemas being used by your JsonTreeReader, and only FlowFiles with like schemas will be allocated to the same bin. When a new FlowFile does not match any existing bin, it is allocated to a new bin; if no free bins exist, the oldest bin is merged to free one. Verify that all 100 source FlowFiles use the exact same schema.
Thanks, Matt
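Not from the original reply, just a toy model of the two bin triggers described above, assuming a bin merges either when it reaches the minimum entry count or when Max Bin Age expires, whichever happens first:

```python
# Toy model of MergeContent binning (illustration only, not NiFi internals):
# a bin merges when it collects min_entries FlowFiles OR when max_bin_age expires.

def merge_trigger(arrival_times, min_entries, max_bin_age):
    """Return (reason, count_merged) for one bin given FlowFile arrival times in seconds."""
    bin_start = arrival_times[0]          # the bin timer starts at the first FlowFile
    merged = []
    for t in arrival_times:
        if t - bin_start > max_bin_age:
            return ("max bin age reached", len(merged))
        merged.append(t)
        if len(merged) >= min_entries:
            return ("min entries reached", len(merged))
    return ("still waiting", len(merged))

# 100 FlowFiles trickling in one every 2 seconds, Max Bin Age = 120 s vs. 600 s:
arrivals = [i * 2 for i in range(100)]
print(merge_trigger(arrivals, min_entries=100, max_bin_age=120))   # merges early with only 61 files
print(merge_trigger(arrivals, min_entries=100, max_bin_age=600))   # waits for all 100
```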
02-27-2019
02:32 AM
Thank you, Matt. Now I understand that we can't replay at GetFile, as it has no inbound connection into which a FlowFile could be inserted. As for the PutSFTP replay, I had initially routed the "failure" and "reject" relationships to LogAttribute, which is why I didn't see any "DROP" event in PutSFTP to replay. So I changed PutSFTP to auto-terminate "failure", simulated an intermittent connection failure, and then I could replay the "DROP" event, which resulted in a successful "SEND" event.
02-21-2019
10:39 PM
@Matt Clarke: Thank you for your detailed explanation!
02-19-2019
06:05 PM
Thanks @Matt Clarke. I went with option 2 and it worked. Thanks again for the quick reply.
02-06-2019
03:53 PM
Thanks for your response, Matt. It is working now with the UpdateAttribute processor since it operates at the attribute level. Thumbs up to you.
12-13-2018
02:13 PM
2 Kudos
@Allan Batista Each NiFi node in your 10-node cluster runs its own copy of the flow.xml and operates against its own set of FlowFiles. The Concurrent Tasks setting on a component (for example, a processor) is applied on every node. In order for a NiFi node to execute a component it must be able to request a thread from the controller, so it must have at least 1 concurrent task. Setting 1 concurrent task (the default) on a processor therefore means every node can run that processor with one concurrent task, so 10 potential threads in operation across the cluster. Threads are not shared between nodes.
Also keep in mind that setting 6 concurrent tasks on a single component (60 equivalent across a 10-node cluster) does not mean that the component on each node will always use all 6. If the volume of data being processed by the component does not warrant using more than 1 or 2 concurrent tasks, the remaining concurrent tasks will not be utilized.
This article may help here: https://community.hortonworks.com/articles/221808/understanding-nifi-max-thread-pools-and-processor.html
Thank you, Matt
If you found this answer addressed your question, please take a moment to log in and click the "ACCEPT" link.
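As a rough illustration (the numbers are taken from the post above; the code itself is just arithmetic), the ceiling on threads scales with node count while actual usage stays demand-driven:

```python
# Back-of-the-envelope view of the thread math above (illustration only).

nodes = 10                 # cluster size from the question
concurrent_tasks = 6       # Concurrent Tasks configured on one processor

# Every node applies the same setting, and threads are never shared between nodes:
potential_threads_per_node = concurrent_tasks
potential_threads_cluster_wide = nodes * concurrent_tasks
print(potential_threads_per_node, potential_threads_cluster_wide)   # 6 60

# Actual usage is demand-driven: a node only uses as many of its concurrent
# tasks as its queued data warrants, so the real thread count is often lower.
```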
12-03-2018
04:13 PM
@Gillu Varghese A few questions:
1. Are you sure all 136 files are reaching the MergeContent processor's inbound connection within 5 minutes? The bin age starts when the very first FlowFile is added to a bin; 5 minutes from that point the bin will be merged even if not all 136 have arrived.
2. Is your NiFi a cluster or a standalone instance? If it is a cluster, are all 136 FlowFiles on the same NiFi node? Each node in a cluster can only merge FlowFiles residing on that same node. The new load-balanced connection feature in NiFi 1.8 can help here if this is the case: https://blogs.apache.org/nifi/entry/load-balancing-across-the-cluster
Try setting your Max Bin Age to a much higher value and see what results you get.
Thank you, Matt
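Not part of the original reply, but a small sketch of point 2 under an assumed 3-node spread: each node's MergeContent only ever sees the FlowFiles held on that node, so no single bin can reach 136.

```python
# Illustration only: on a cluster, each node merges only the FlowFiles it holds,
# so 136 files scattered across nodes never form one 136-file bundle.
# The 3-node round-robin spread below is hypothetical.

from collections import Counter

flowfiles = list(range(136))
nodes = ["node1", "node2", "node3"]

placement = Counter(nodes[i % len(nodes)] for i in flowfiles)
print(dict(placement))          # e.g. {'node1': 46, 'node2': 45, 'node3': 45}

# Each MergeContent instance sees only its node's share, so the largest possible
# merge per node is that node's count, never 136. A load-balanced connection
# (NiFi 1.8+) can first gather all 136 onto a single node before merging.
```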
11-15-2018
01:54 PM
Is the destination NiFi of your ReportingTask a NiFi cluster or a single NiFi instance? On the NiFi instance the RPG is pointing at, what is configured in the following property: nifi.remote.input.host
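For reference (not part of the original exchange), the site-to-site properties that sit together in nifi.properties on the destination instance look roughly like this; the host and port values below are placeholders, not recommendations:

```
# Site-to-Site input properties on the destination NiFi (values are placeholders)
nifi.remote.input.host=nifi-dest.example.com
nifi.remote.input.secure=false
nifi.remote.input.socket.port=10443
nifi.remote.input.http.enabled=true
```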
11-12-2018
06:08 PM
3 Kudos
NiFi restricted components are those processors, controller services, or reporting tasks that have the ability to run user-defined code or to access/alter data on the local host filesystem.

The NiFi User Guide explains this as follows:
-----------------------------------------
Restricted components will be marked with an icon next to their name. These are components that can be used to execute arbitrary unsanitized code provided by the operator through the NiFi REST API/UI or can be used to obtain or alter data on the NiFi host system using the NiFi OS credentials. These components could be used by an otherwise authorized NiFi user to go beyond the intended use of the application, escalate privilege, or could expose data about the internals of the NiFi process or the host system. All of these capabilities should be considered privileged, and admins should be aware of these capabilities and explicitly enable them for a subset of trusted users. Before a user is allowed to create and modify restricted components they must be granted access.
------------------------------------------
Users can only be restricted from adding such components if NiFi is secured. Users of an unsecured NiFi will always have access to all components.

Prior to HDF 3.2 or Apache NiFi 1.6, all restricted components were covered by a single authorization policy:

| Ranger Policy (Base policies) | NiFi Policies (Hamburger menu) | Ranger permissions description |
|---|---|---|
| /restricted-components | Access restricted components | Read/View - N/A. Write/Modify - Gives granted users the ability to add components to the canvas that are tagged as "restricted". |

It was decided that lumping all components into one policy was not ideal, so NIFI-4885 was created to address this so that a user's access to restricted components would be based on the level of restricted access being granted:
- read-filesystem
- read-distributed-filesystem
- write-filesystem
- write-distributed-filesystem
- execute-code
- access-keytab
- export-nifi-details

In order to avoid backward compatibility issues when users upgrade to HDF 3.2+ or Apache NiFi 1.6.0+, the "Access restricted components" base policy still exists and defaults to "regardless of restrictions". In the NiFi global "Access Policies" UI this is the default policy, and in Ranger it is still associated with just the "/restricted-components" policy. The new sub-policies are represented as follows in the Ranger and NiFi UIs:

| Ranger Policy (Base policies) | NiFi Policies (Hamburger menu) | Ranger permissions description |
|---|---|---|
| /restricted-components/read-filesystem | Access restricted components. Sub policy: Requiring 'read filesystem' | Read/View - N/A. Write/Modify - Allows users to create/modify restricted components requiring read filesystem. |
| /restricted-components/read-distributed-filesystem | Access restricted components. Sub policy: Requiring 'read distributed filesystem' | Read/View - N/A. Write/Modify - Allows users to create/modify restricted components requiring read distributed filesystem. |
| /restricted-components/write-filesystem | Access restricted components. Sub policy: Requiring 'write filesystem' | Read/View - N/A. Write/Modify - Allows users to create/modify restricted components requiring write filesystem. |
| /restricted-components/write-distributed-filesystem | Access restricted components. Sub policy: Requiring 'write distributed filesystem' | Read/View - N/A. Write/Modify - Allows users to create/modify restricted components requiring write distributed filesystem. |
| /restricted-components/execute-code | Access restricted components. Sub policy: Requiring 'execute code' | Read/View - N/A. Write/Modify - Allows users to create/modify restricted components requiring execute code. |
| /restricted-components/access-keytab | Access restricted components. Sub policy: Requiring 'access keytab' | Read/View - N/A. Write/Modify - Allows users to create/modify restricted components requiring access keytab. |
| /restricted-components/export-nifi-details | Access restricted components. Sub policy: Requiring 'export nifi details' | Read/View - N/A. Write/Modify - Allows users to create/modify restricted components requiring export nifi details. |

Below is a list of restricted components for each of the above sub-policies (current as of CFM 2.1.1 and Apache NiFi 1.13):

Read-filesystem:

| NiFi component | Component type | Access provisions |
|---|---|---|
| FetchFile | Processor | Provides operator the ability to read from any file that NiFi has access to. |
| TailFile | Processor | Provides operator the ability to read from any file that NiFi has access to. |
| GetFile | Processor | Provides operator the ability to read from any file that NiFi has access to. |

Read-distributed-filesystem (added in NiFi 1.13):

| NiFi component | Component type | Access provisions |
|---|---|---|
| FetchHDFS | Processor | Provides operator the ability to retrieve any file that NiFi has access to in HDFS or the local filesystem. |
| FetchParquet | Processor | Provides operator the ability to retrieve any file that NiFi has access to in HDFS or the local filesystem. |
| GetHDFS | Processor | Provides operator the ability to retrieve any file that NiFi has access to in HDFS or the local filesystem. |
| GetHDFSSequenceFile | Processor | Provides operator the ability to retrieve any file that NiFi has access to in HDFS or the local filesystem. |
| MoveHDFS | Processor | Provides operator the ability to retrieve any file that NiFi has access to in HDFS or the local filesystem. |

Write-filesystem:

| NiFi component | Component type | Access provisions |
|---|---|---|
| FetchFile | Processor | Provides operator the ability to delete any file that NiFi has access to. |
| GetFile | Processor | Provides operator the ability to delete any file that NiFi has access to. |
| PutFile | Processor | Provides operator the ability to write to any file that NiFi has access to. |

Write-distributed-filesystem (added in NiFi 1.13):

| NiFi component | Component type | Access provisions |
|---|---|---|
| DeleteHDFS | Processor | Provides operator the ability to delete any file that NiFi has access to in HDFS or the local filesystem. |
| GetHDFS | Processor | Provides operator the ability to delete any file that NiFi has access to in HDFS or the local filesystem. |
| GetHDFSSequenceFile | Processor | Provides operator the ability to delete any file that NiFi has access to in HDFS or the local filesystem. |
| MoveHDFS | Processor | Provides operator the ability to delete any file that NiFi has access to in HDFS or the local filesystem. |
| PutHDFS | Processor | Provides operator the ability to delete any file that NiFi has access to in HDFS or the local filesystem. |
| PutParquet | Processor | Provides operator the ability to write any file that NiFi has access to in HDFS or the local filesystem. |

Execute-code:

| NiFi component | Component type | Access provisions |
|---|---|---|
| ScriptedReportingTask | Reporting Task | Provides operator the ability to execute arbitrary code assuming all permissions that NiFi has. |
| ScriptedLookupService | Controller Service | Provides operator the ability to execute arbitrary code assuming all permissions that NiFi has. |
| ScriptedReader | Controller Service | Provides operator the ability to execute arbitrary code assuming all permissions that NiFi has. |
| ScriptedRecordSetWriter | Controller Service | Provides operator the ability to execute arbitrary code assuming all permissions that NiFi has. |
| ExecuteFlumeSink | Processor | Provides operator the ability to execute arbitrary Flume configurations assuming all permissions that NiFi has. |
| ExecuteFlumeSource | Processor | Provides operator the ability to execute arbitrary Flume configurations assuming all permissions that NiFi has. |
| ExecuteGroovyScript | Processor | Provides operator the ability to execute arbitrary code assuming all permissions that NiFi has. |
| ExecuteProcess | Processor | Provides operator the ability to execute arbitrary code assuming all permissions that NiFi has. |
| ExecuteScript | Processor | Provides operator the ability to execute arbitrary code assuming all permissions that NiFi has. |
| ExecuteStreamCommand | Processor | Provides operator the ability to execute arbitrary code assuming all permissions that NiFi has. |
| InvokeScriptedProcessor | Processor | Provides operator the ability to execute arbitrary code assuming all permissions that NiFi has. |

Access-keytab:

| NiFi component | Component type | Access provisions |
|---|---|---|
| KeytabCredentialsService | Controller Service | Allows user to define a keytab and principal that can then be used by other components. |

Export-nifi-details:

| NiFi component | Component type | Access provisions |
|---|---|---|
| SiteToSiteBulletinReportingTask | Reporting Task | Provides operator the ability to send sensitive details contained in bulletin events to any external system. |
| SiteToSiteProvenanceReportingTask | Reporting Task | Provides operator the ability to send sensitive details contained in Provenance events to any external system. |

***Note: Some components are found under multiple sub-policies above. In order for a user to utilize such a component, they must be granted access to every sub-policy required by that component.

Exceptions in HDF 3.2 and Apache NiFi 1.7 and 1.8: In order to use the following components, users must have full access to all restricted components policies:

| NiFi component | Component type | Access provisions |
|---|---|---|
| PutORC | Processor | This component requires access to restricted components regardless of restriction. Apache Jira: NIFI-5815 |

A full breakdown of all other NiFi policies can be found here: NiFi Ranger based policy descriptions - Cloudera Community
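To make the execute-code category above concrete, here is a minimal sketch of the sort of logic an ExecuteScript processor can run: arbitrary code executing with the NiFi service account's OS permissions, which is exactly what the /restricted-components/execute-code policy gates. The stub session object and the file path are illustrative stand-ins, not NiFi's real objects or a recommended flow.

```python
# Sketch of the kind of logic an ExecuteScript processor can run with NiFi's
# OS permissions. In real ExecuteScript (Jython engine) the 'session' and
# relationship objects are provided by NiFi; tiny stand-ins are defined here
# so the sketch runs on its own.

class _StubSession:                        # stand-in for NiFi's ProcessSession
    def get(self):
        return {"attrs": {}}
    def putAttribute(self, ff, key, value):
        ff["attrs"][key] = value
        return ff
    def transfer(self, ff, rel):
        print("transferred to", rel, ff["attrs"])

session, REL_SUCCESS, REL_FAILURE = _StubSession(), "success", "failure"

flow_file = session.get()
if flow_file is not None:
    try:
        # Any file readable by the NiFi service account can be pulled into the flow:
        with open("/etc/hostname") as f:                     # hypothetical path
            flow_file = session.putAttribute(flow_file, "host.file.contents", f.read().strip())
        session.transfer(flow_file, REL_SUCCESS)
    except Exception:
        session.transfer(flow_file, REL_FAILURE)
```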