Member since: 07-30-2019
Posts: 2906
Kudos Received: 1442
Solutions: 844
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 58 | 04-17-2024 11:30 AM |
| | 66 | 04-16-2024 05:36 AM |
| | 38 | 04-15-2024 05:31 AM |
| | 121 | 04-03-2024 05:59 AM |
| | 135 | 04-02-2024 01:22 PM |
03-29-2024
07:13 AM
1 Kudo
@jame1997 My first question would be how you have your PutSyslog processor configured. Are you using TCP or UDP? If you are using UDP, there is no confirmed delivery; UDP is not a lossless protocol. TCP does provide confirmed delivery, at the expense of speed.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
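To illustrate the difference outside of NiFi, here is a small Python sketch (the port and message are made up). A UDP datagram sent to a port nobody is listening on is silently dropped, while a TCP connection attempt to the same port fails immediately, so the sender finds out:

```python
import socket

def udp_send(host: str, port: int, message: bytes) -> bool:
    """Fire-and-forget: succeeds even if nothing is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(message, (host, port))
    return True

def tcp_send(host: str, port: int, message: bytes) -> bool:
    """Connection-oriented: fails fast if no listener accepts."""
    try:
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall(message)
        return True
    except OSError:
        return False

msg = b"<14>test syslog message"
# A UDP datagram to a closed port "succeeds" locally; the loss is silent.
print(udp_send("127.0.0.1", 59999, msg))  # True
# A TCP connection to the same closed port is refused, so the sender knows.
print(tcp_send("127.0.0.1", 59999, msg))  # False
```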
03-29-2024
07:06 AM
1 Kudo
@DeepakDonde https://issues.apache.org/jira/browse/NIFI-12513 does not mention the GetHTTP processor, so you could certainly try that processor to see whether you experience the same issue. A downgrade would lose all improvements and bug fixes introduced in Apache NiFi 1.25. Otherwise, you could wait until 1.26, which contains the fix, is released.

The InvokeHTTP processor is part of the NiFi Standard nar, which includes a lot of NiFi components. You could also try downloading just the 1.24.0 standard nar from the Maven Central repository and adding it to the extensions folder of your 1.25.0 NiFi. This would make both the 1.24 and 1.25 versions of many components available in your NiFi. You could then use the 1.24 version of InvokeHTTP instead of the 1.25 version that has the issue, while continuing to use the 1.25 versions of all other components. While I have added multiple versions of the same nar to my NiFi installations in the past, I have not done so with the standard nar. If you run into issues, you can stop your NiFi, remove the added nar, and restart so things go back to the way they were.

https://mvnrepository.com/artifact/org.apache.nifi/nifi-standard-shared-nar/1.24.0
https://repo1.maven.org/maven2/org/apache/nifi/nifi-standard-shared-nar/1.24.0/nifi-standard-shared-nar-1.24.0.nar

Thank you, Matt
03-29-2024
06:51 AM
@s198 NiFi has no ability to merge files remotely. NiFi would need to consume all the files (ListHDFS --> FetchHDFS), then merge the content of those FlowFiles (MergeContent or MergeRecord), then use UpdateAttribute to set the desired filename on the merged file, and finally write the merged file back to HDFS using the PutHDFS processor. If you are using a NiFi cluster, you would need to do all this merging on one node of the cluster; a NiFi node can only execute against the FlowFiles present on that one specific node.

Thank you, Matt
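Conceptually, the merge step in that flow is binary concatenation plus a rename. A minimal local-filesystem sketch of the same idea (HDFS itself and the NiFi processors are out of scope; the file names are throwaway):

```python
from pathlib import Path
import tempfile

def merge_files(sources, dest: Path) -> Path:
    """Concatenate the source files' bytes into dest -- roughly what
    MergeContent (binary concatenation) plus UpdateAttribute
    (filename) plus PutHDFS accomplish inside the flow."""
    with dest.open("wb") as out:
        for src in sources:
            out.write(src.read_bytes())
    return dest

# Demo on throwaway local files:
tmp = Path(tempfile.mkdtemp())
parts = []
for i, chunk in enumerate([b"part1\n", b"part2\n"]):
    p = tmp / f"in_{i}.txt"
    p.write_bytes(chunk)
    parts.append(p)
merged = merge_files(parts, tmp / "merged.txt")
print(merged.read_bytes())  # b'part1\npart2\n'
```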
03-28-2024
10:26 AM
3 Kudos
@C1082 That ERROR has nothing to do with the community question you asked about, which was the following error while accessing the NiFi UI:

javax.net.ssl.SSLException: Connection reset

Fixing the ERROR logged by the DBCPConnectionPool controller service shared in your last post will not resolve your UI access issue. Are you still having issues accessing the NiFi UI? If not, try searching for the DBCPConnectionPool that is throwing this exception and verify its configuration and the driver the user has configured it to use. You can find this specific NiFi Controller Service by searching on its unique assigned ID: "8c23244e-6b42-38c5-aaf2-effc40ab1d4b". You'll want to make sure the driver still exists at the configured location and is owned by and accessible to the NiFi service user. Sharing the exact SQL DB version and the database driver currently in use would also help here. Was this Controller Service working before the AKS version upgrade?

Thank you, Matt
03-28-2024
10:10 AM
1 Kudo
@DeepakDonde The issue you are describing was caused by a change in the Apache NiFi InvokeHTTP processor that tries to URL encode the URL entered: https://issues.apache.org/jira/browse/NIFI-12513

The fix for this is in https://issues.apache.org/jira/browse/NIFI-12785, which will be part of the Apache NiFi 1.26 and Apache NiFi 2.0.0-M3 releases. Since the change that caused this issue was added in Apache NiFi 1.25 and Apache NiFi 2.0.0-M2, you could use an earlier version like Apache NiFi 1.24 or Apache NiFi 2.0.0-M1 to get around the issue until the two above-mentioned versions are released.

Thank you, Matt
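This is not NiFi's exact code, but the general failure mode behind this class of bug is easy to reproduce: re-encoding a URL that is already percent-encoded corrupts it, because each '%' becomes '%25'. The path below is a made-up example:

```python
from urllib.parse import quote

# A URL path segment that is already percent-encoded:
already_encoded = "/api/files/my%20report.pdf"

# Encoding it again turns each '%' into '%25', changing the path
# the server actually receives.
double_encoded = quote(already_encoded)
print(double_encoded)  # /api/files/my%2520report.pdf
```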
03-28-2024
07:43 AM
@TreantProtector Everything the user adds to the canvas, including controller services and reporting tasks, is auto-saved in the flow.json.gz. Each time a change is made, the current flow.json.gz is archived and a new flow.json.gz is generated. Within the flow.json.gz are all components (processors, connections, controller services, reporting tasks, funnels, process groups, ports, parameters, etc.) and their configurations. Any configuration property that is "sensitive" (passwords) is encrypted in the flow.json.gz file. So in order to load that flow.json.gz in another NiFi, you would need to know the nifi.sensitive.props.algorithm and nifi.sensitive.props.key used by the original NiFi it came from (see "Encrypted Passwords in Flows" in the NiFi documentation).

If you don't have that info, the flow.json.gz can still be loaded on another NiFi after manually editing the file to remove all the "enc{...}" values. Once the flow.json.gz loads, an authorized user would need to re-enter all passwords in all components where they are needed via the NiFi UI.

Thank you, Matt
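As a sketch of that manual edit, the following Python blanks out every "enc{...}" value in a gzipped flow JSON. The exact enc{...} token format is taken from the post; always work on a copy of the real file:

```python
import gzip
import json
import os
import re
import tempfile

# Assumed token format for encrypted values, e.g. enc{0a1b2c}
ENC_PATTERN = re.compile(r"enc\{[^}]*\}")

def scrub_sensitive_values(in_path: str, out_path: str) -> int:
    """Blank out every enc{...} value in a flow.json.gz so the flow
    can load on a NiFi with different sensitive props settings.
    Returns the number of values scrubbed. Work on a copy!"""
    with gzip.open(in_path, "rt", encoding="utf-8") as f:
        text = f.read()
    scrubbed, count = ENC_PATTERN.subn("", text)
    json.loads(scrubbed)  # sanity check: result is still valid JSON
    with gzip.open(out_path, "wt", encoding="utf-8") as f:
        f.write(scrubbed)
    return count

# Demo on a tiny synthetic file (a real flow.json.gz is far larger):
d = tempfile.mkdtemp()
src = os.path.join(d, "flow.json.gz")
with gzip.open(src, "wt", encoding="utf-8") as f:
    f.write(json.dumps({"Password": "enc{2a9c}"}))
print(scrub_sensitive_values(src, os.path.join(d, "scrubbed.json.gz")))  # 1
```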
03-28-2024
07:27 AM
@C1082 The DBCPConnectionPool is a controller service that an end user would have added via the NiFi UI. The configuration of this controller service is done by the user, and one of its properties specifies the user-defined location of the database driver, which the user must provide and which is not included with NiFi. The dataflow components added to the NiFi canvas have no relationship to UI access issues.

The "javax.net.ssl.SSLException: Connection reset" exception when trying to access the UI is an issue with the TLS exchange between your client (browser) and NiFi. You'll need to look closer at the nifi-app.log and nifi-user.log for this exception and review the entire stack trace that goes with it. Without knowing the specifics of your NiFi setup, I can't say whether your NiFi is enforcing a mutual TLS exchange or only a one-way TLS exchange. A securely configured NiFi, depending on configuration, will either "REQUIRE" the client to provide a trusted clientAuth certificate in the TLS response or "WANT" a trusted clientAuth certificate in the response. A connection reset may happen if the TLS exchange was not successful, which could be a trust chain issue, a network issue, or a missing clientAuth certificate when the NiFi configuration required one in the TLS response.

Thank you, Matt
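For reference, the client-side difference between one-way and mutual TLS can be sketched with Python's standard ssl module; this is generic ssl usage, not NiFi configuration, and the certificate paths are made-up placeholders:

```python
import ssl

# One-way TLS: the client only verifies the server's certificate.
one_way = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# Mutual TLS: the client additionally presents its own clientAuth
# certificate. The paths below are hypothetical placeholders.
mutual = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# mutual.load_cert_chain("client.crt", "client.key")

# Both contexts verify the server, but only a client presenting a
# trusted certificate satisfies a server that REQUIREs clientAuth.
print(one_way.verify_mode == ssl.CERT_REQUIRED)  # True
```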
03-28-2024
06:46 AM
@s198 Great to hear the suggestions I provided solved your question in this community thread. We encourage our community members to start new threads for unrelated questions, so that what solved the issue in a thread remains clear to other community users who may come across it.

That being said, my understanding of this new query is that you have a dataflow that starts from a single FlowFile produced by your sqoop job, which then becomes many FlowFiles, but you require only a single FlowFile post-PutSFTP for downstream processing of job completion. That could be solved using the Wait and Notify processors, which can be complicated to set up, or using the "FlowFile Concurrency" capability on a Process Group. I shared a similar solution in a few other community posts on how this works:

https://community.cloudera.com/t5/Support-Questions/How-to-detect-all-branches-in-a-NiFi-flow-have-finished/m-p/383475#M244918
https://community.cloudera.com/t5/Support-Questions/NiFi-Trigger-a-Processor-once-after-the-Queue-gets-empty-for/m-p/381801#M244416

Thank you, Matt
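The fan-out/fan-in contract described above (many FlowFiles in, exactly one completion signal out) can be sketched in plain Python as an analogy; this is not how Wait/Notify or FlowFile Concurrency are implemented, just the shape of the problem they solve:

```python
from concurrent.futures import ThreadPoolExecutor

def process_part(part: str) -> str:
    # Stand-in for the per-FlowFile work (e.g. one SFTP transfer)
    return part.upper()

def run_batch(parts: list) -> str:
    """Process every piece, then emit exactly one completion signal,
    mirroring the fan-out/fan-in behavior that Wait/Notify or Process
    Group FlowFile Concurrency provides inside NiFi."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_part, parts))
    return f"done:{len(results)}"

print(run_batch(["a", "b", "c"]))  # done:3
```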
03-27-2024
12:49 PM
2 Kudos
@s198 The List<abc> type processors are source processors that do not accept inbound connections, since they are designed to create FlowFiles, not to modify existing FlowFiles. I am not clear on what "So we used Sqoop completion" does to create a FlowFile in your NiFi dataflow, which is then passed to RouteOnAttribute (assuming this is the processor you are referring to by "Router Attribute") via a connection. What attributes exist on the FlowFile being processed by the RouteOnAttribute processor? Are there any FlowFile attributes on this FlowFile about the specific file needing to be fetched by the FetchHDFS processor (like filename and path)?

-----

If the sqoop job output produced one FlowFile for each HDFS file to be fetched, and each of those FlowFiles has attributes for the path and filename of the HDFS file content to be fetched, you could do the following: set the default NiFi Expression Language statement "${path}/${filename}" in the "HDFS File Name" property of the FetchHDFS processor. Those two FlowFile attributes are expected to be in the format:

- filename: the name of the file that will be read from HDFS.
- path: the absolute path of the file's directory on HDFS. For example, "/tmp/abc/1/2/3".

Attribute names are case sensitive.

-----

If the sqoop job simply outputs one FlowFile from which you expect to fetch a lot of HDFS files, that is not how FetchHDFS functions. FetchHDFS expects one FlowFile for each HDFS file content being fetched. FetchHDFS does not create new FlowFiles; it only adds content to an existing FlowFile. If this matches your scenario, you may be able to use the GetHDFSFileInfo processor, which does accept an inbound connection. It can be configured with just a path. If you set "Group Results = None" and "Destination = Attributes", you could send the produced FlowFiles to FetchHDFS to get the content for each FlowFile output.

You would still need your RouteOnAttribute processor to make sure only FlowFiles where "${hdfs.type} = file" are routed to FetchHDFS and other types are discarded. You would probably also want an UpdateAttribute processor so you could set the filename of the FlowFile to the hdfs.objectName (done by adding the dynamic property filename = ${hdfs.objectName}). Then feed those FlowFiles to your FetchHDFS processor configured to use the ${hdfs.path}${hdfs.objectName} NiFi Expression statement in the "HDFS File Name" property.

------

Thank you, Matt
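As a plain-Python analogy for that route-and-rename step (the attribute names come from the post; the logic itself is just an illustration, and it assumes hdfs.path carries no trailing slash, hence the inserted '/'):

```python
def route_and_rename(attrs: dict):
    """Mimics the RouteOnAttribute + UpdateAttribute steps described
    above: keep only entries where hdfs.type is 'file', set filename
    from hdfs.objectName, and build the fetch path."""
    if attrs.get("hdfs.type") != "file":
        return None  # directories and other types are discarded
    routed = dict(attrs)
    routed["filename"] = routed["hdfs.objectName"]
    routed["fetch.path"] = routed["hdfs.path"] + "/" + routed["hdfs.objectName"]
    return routed

out = route_and_rename({"hdfs.type": "file",
                        "hdfs.path": "/tmp/abc",
                        "hdfs.objectName": "data.csv"})
print(out["fetch.path"])  # /tmp/abc/data.csv
print(route_and_rename({"hdfs.type": "directory"}))  # None
```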
03-27-2024
12:30 PM
@jpalmer From the image you shared, the bottleneck is actually in the custom PutGeoMesa 4.0.4 processor, which is not an out-of-the-box Apache NiFi processor.

A connection has backpressure settings to limit the number of FlowFiles that can be queued (it is a soft limit, which means backpressure gets applied once the connection's backpressure threshold is reached or exceeded). Once backpressure is applied, it will not be released until the queue drops back below the configured thresholds. Backpressure, when applied, prevents the upstream processor from being scheduled to execute until that backpressure is removed. A connection turns red when backpressure is being applied, and since the connection after PutGeoMesa 4.0.4 is not red, no backpressure is being applied to that processor. So your issue is that PutGeoMesa 4.0.4 is not able to process the FlowFiles being queued to it fast enough, thus causing the backup in every upstream connection leading back to the source processor.

Since it is a custom processor, I can't speak to its performance or tuning capabilities. I also don't know if the PutGeoMesa 4.0.4 processor supports concurrent executions, but you could try the following: if you right-click on the PutGeoMesa 4.0.4 processor and select Configure, you can select the SCHEDULING tab. Within the Scheduling tab you can set "CONCURRENT TASKS". The default is 1, and this custom processor might ignore this property. What concurrent tasks do is allow the processor to execute multiple times concurrently (so think of each additional concurrent task as creating another identical processor). A processor component is scheduled to request a thread to execute based on the configured Run Schedule (for the Timer Driven scheduling strategy, the default of 0 secs means schedule as fast as possible). When it is scheduled, the processor requests a thread from the NiFi Timer Driven thread pool. That thread is then used to execute the processor's code against the source connection's FlowFile(s).

The scheduler will then try to schedule it again based on the run schedule; if concurrent tasks is still set to 1 and the previous execution is still running, it will not execute again until the in-use thread finishes. But if you set concurrent tasks to, say, 3, the processor could potentially execute 3 threads concurrently (each thread working on different FlowFile(s) from the source connection). Again, I don't know if this custom processor will ignore this property or support it. Nor do I know if this processor was coded in a thread-safe manner, meaning that concurrent thread executions would not cause issues. So even if this appears to improve throughput, verify the integrity of the data coming out of the processor.

Also keep in mind that adding concurrent tasks to a processor (especially a processor like this that appears to have long-running threads; we can see it only processed 23 FlowFiles using 4.5 minutes of CPU time, which is pretty slow) can quickly lead to this processor using all the available threads from the Max Timer Driven Thread pool, resulting in other processors appearing to perform slower as they get an available thread to execute less often. You can increase the size of the Max Timer Driven Thread pool from the NiFi global menu in the upper right corner, but you need to do so carefully while monitoring CPU load average and memory usage as you slowly increase the setting.

Thank you, Matt
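The backpressure mechanic can be sketched with a bounded queue in Python. Note the difference: Python's Queue enforces a hard limit (the producer blocks mid-put), whereas NiFi's threshold is soft (the upstream processor simply stops being scheduled), but the effect on the producer is analogous. The sleep duration and item names are arbitrary:

```python
import queue
import threading
import time

# A "connection" with a backpressure object threshold of 3.
connection = queue.Queue(maxsize=3)

def slow_consumer():
    # Stand-in for a slow downstream processor
    while True:
        item = connection.get()
        if item is None:
            break
        time.sleep(0.05)  # slow per-FlowFile work

consumer = threading.Thread(target=slow_consumer, daemon=True)
consumer.start()

# The producer blocks once the queue is full, just as NiFi stops
# scheduling the upstream processor while backpressure is applied.
start = time.monotonic()
for i in range(6):
    connection.put(f"flowfile-{i}")
elapsed = time.monotonic() - start
connection.put(None)  # let the consumer exit
print(elapsed > 0.04)  # True: the producer had to wait on the consumer
```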