Member since: 03-11-2022
Posts: 30
Kudos Received: 11
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 833 | 03-26-2024 07:33 AM |
| | 10750 | 01-22-2024 06:12 AM |
| | 823 | 01-12-2024 08:59 AM |
08-05-2024
05:31 AM
Hi Matt, thanks for your reply and the info you shared. We made some changes, so we are now signing and then encrypting the files [screenshots: signing, encrypting]. When our next system tries to decrypt the file, we get this error:

gpg: encrypted with unknown algorithm 183
gpg: decryption failed: Invalid cipher algorithm

Googling "encrypted with unknown algorithm 183" doesn't really turn up anything useful, so we're trying to figure out whether the issue we're facing is NiFi related or on the decrypting side. Any ideas would help. Thank you.
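One way to narrow this down would be to dump the OpenPGP packet structure of a file that decrypts fine and of one that fails, and compare the two. A minimal Groovy sketch, assuming gpg is installed on the PATH and using test.gpg as a placeholder name for the NiFi output:

```groovy
// Diagnostic sketch: dump the OpenPGP packet structure of a file.
// Assumes gpg is on the PATH; 'test.gpg' is a placeholder file name.
def listPackets = { String path ->
    def proc = ['gpg', '--list-packets', path].execute()
    proc.waitFor()
    proc.in.text
}

println listPackets('test.gpg')
```

If gpg cannot even parse the packet headers of the NiFi output, the bytes reaching the decrypting side are most likely not valid OpenPGP data at all; symmetric algorithm IDs only go up to the low teens plus a private range of 100-110, so "algorithm 183" usually points at corrupted or misordered data rather than a real cipher choice.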
08-01-2024
07:02 AM
Extra info: here is the StandardPGPPublicKeyService configuration [screenshot]. gpg-pub.asc was exported from the old Linux system.
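For completeness, the export referred to here is the usual ASCII-armored public-key export. A sketch of how such a file would be produced on the old box (the key id is a placeholder, not the real one):

```groovy
// Sketch: produce an ASCII-armored public key export like gpg-pub.asc.
// 'dave@example.org' is a placeholder key id.
def proc = ['gpg', '--export', '--armor', 'dave@example.org'].execute()
proc.waitFor()
new File('gpg-pub.asc').text = proc.in.text
```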
08-01-2024
07:00 AM
Hi there,

Now for something totally different 🙂 We need to encrypt and decrypt some files for a flow we are working on. Initially GPG was configured on a Linux server, and we want to move this functionality to NiFi. On the Linux server we are using this version of GPG:

gpg (GnuPG) 2.0.22
libgcrypt 1.5.3

We encrypt a test file and get a file with a size of 656 bytes. This file can be decrypted by the next system that needs to process it. We configured NiFi as follows [screenshot]. When reading the same file in the flow, all goes well and we send the output to the next system, which unfortunately fails to decrypt the file: they get an invalid key length error. So I checked the file size from NiFi, and it is only 394 bytes. The keyring file used in the public key service is an export from the Linux system, which should work fine; passphrase and all are the same.

So we were wondering: which GPG is being used by NiFi? Could it be a version compatibility issue, or something else? Any tips would be very helpful 🤓

Thanks in advance.
Regards, Dave
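For later readers: NiFi runs on the JVM, and its PGP processors are built on the Bouncy Castle OpenPGP library, so no gpg binary or GnuPG version is involved on the NiFi side. Below is a rough standalone Groovy sketch of what public-key encryption with Bouncy Castle looks like; the file names, key file, and algorithm choices (AES-256, ZIP compression, no armor) are illustrative assumptions, not NiFi's actual configuration. Note that two perfectly valid OpenPGP encryptions of the same plaintext can differ a lot in size depending on compression and armoring, so 656 vs 394 bytes is not by itself a sign of corruption.

```groovy
// Standalone sketch of public-key OpenPGP encryption with Bouncy Castle,
// the library NiFi uses internally (NiFi does not shell out to gpg).
// File names and algorithm choices are assumptions for illustration.
@Grab('org.bouncycastle:bcpg-jdk18on:1.78.1')
import org.bouncycastle.bcpg.CompressionAlgorithmTags
import org.bouncycastle.bcpg.SymmetricKeyAlgorithmTags
import org.bouncycastle.jce.provider.BouncyCastleProvider
import org.bouncycastle.openpgp.*
import org.bouncycastle.openpgp.operator.jcajce.*
import java.security.SecureRandom
import java.security.Security

Security.addProvider(new BouncyCastleProvider())

// Pick the first encryption-capable key out of the armored export.
PGPPublicKey readPublicKey(File keyFile) {
    def rings = new PGPPublicKeyRingCollection(
        PGPUtil.getDecoderStream(keyFile.newInputStream()),
        new JcaKeyFingerprintCalculator())
    for (ring in rings) {
        for (key in ring) {
            if (key.encryptionKey) return key
        }
    }
    throw new IllegalStateException("no encryption key in ${keyFile}")
}

def pubKey = readPublicKey(new File('gpg-pub.asc'))   // the exported key
byte[] plain = new File('test.txt').bytes             // placeholder input

def encGen = new PGPEncryptedDataGenerator(
    new JcePGPDataEncryptorBuilder(SymmetricKeyAlgorithmTags.AES_256)
        .setWithIntegrityPacket(true)
        .setSecureRandom(new SecureRandom())
        .setProvider('BC'))
encGen.addMethod(new JcePublicKeyKeyEncryptionMethodGenerator(pubKey).setProvider('BC'))

new File('test.gpg').withOutputStream { out ->
    def encOut = encGen.open(out, new byte[4096])
    def compGen = new PGPCompressedDataGenerator(CompressionAlgorithmTags.ZIP)
    def litGen = new PGPLiteralDataGenerator()
    def litOut = litGen.open(compGen.open(encOut),
        PGPLiteralData.BINARY, 'test.txt', (long) plain.length, new Date())
    litOut.write(plain)
    litOut.close()
    compGen.close()
    encOut.close()
}
```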
Labels:
- Apache NiFi
06-19-2024
06:20 AM
1 Kudo
Update from our side: from the looks of things, stopping and removing the MergeContent used to create CSV files has solved the JVM heap issue. We will watch the MEMORY resource consideration on processors when implementing new stuff. Thank you all for the great advice and fast replies!
06-14-2024
06:43 AM
Hi Matt, thank you for the extensive reply. This is a lot to think about. We'll go through this Monday morning with the team to see if we can get those metrics on our dashboards 🤓 There was a MergeContent temporarily on the canvas to gather analysis data, which might have caused this issue. That template was already removed yesterday. @SAMSAL @MattWho thank you for your replies. I'll get back to you Monday with our findings.
06-14-2024
06:27 AM
Hi there @SAMSAL, we are using version 1.24 and no Python scripting. We're using all standard processors (UpdateAttribute, RouteOnAttribute, LogAttribute) and some specials like GetSQS, ExecuteSQL, and LookupAttribute, plus some Groovy scripting for creating custom metrics (a minimal sketch of that kind of script follows below). We've just restarted the node with the high heap; it's now averaging at 10%. We'll monitor the progress this weekend and get back with the results Monday. Hopefully the restart helped. Thank you for now and have a nice weekend 🙂
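The Groovy scripts are nothing special; this is a minimal sketch of the kind of thing meant, written as an ExecuteScript body. The 'event.type' attribute name is made up for illustration; session and REL_SUCCESS are bindings NiFi provides to ExecuteScript:

```groovy
// Minimal ExecuteScript (Groovy) body for a custom metric: count flow
// files per event type. 'event.type' is a hypothetical attribute name;
// 'session' and 'REL_SUCCESS' are bindings provided by ExecuteScript.
def flowFile = session.get()
if (flowFile != null) {
    def eventType = flowFile.getAttribute('event.type') ?: 'unknown'
    // Counters show up under the NiFi global menu > Counters.
    session.adjustCounter("events.${eventType}", 1, false)
    session.transfer(flowFile, REL_SUCCESS)
}
```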
06-14-2024
04:14 AM
1 Kudo
Correction: the primary node switched by itself, and the high heap utilization stayed on the original node. So the problem doesn't seem to follow the primary role to the current primary node.
06-14-2024
04:07 AM
1 Kudo
Here’s some extra info: the high JVM heap utilization is on the primary node; core load is normal. [screenshots: JVM heap, core load]
06-14-2024
03:21 AM
1 Kudo
Hi there,

Maybe you can shed some light on an issue we’ve been having since we went live with one of our flows, which uses a lot of processors running on the primary node. We are using a 2-node cluster where we run a couple of GetSQS (timer driven, 1 min) and ExecuteSQL (cron scheduling, spread out during the day), both on the primary node only. We do this so only one task runs, resulting in just one trigger flow file; otherwise we would get 2 SQS events for the same event and 2 ExecuteSQL queries.

What we have been seeing is that the JVM heap utilization on the primary node is above 70%. It increased gradually from 25% over the past few days, so it didn’t peak straight to 70%. The secondary node, on the other hand, is stable below 10%. So it seems quite clear that there is a correlation between the processors running only on the primary node and the heap used.

Our questions are:
1. When a primary-node processor triggers, will the flow file be processed by the following processors in the flow only on the primary node, or can this also be done by the secondary? The other processors are running on all nodes.
2. Could ExecuteSQL on cron cause high heap utilization? The crons run spread out during the day and volumes are low, around 1300 a day.
3. Could GetSQS cause high heap utilization? We get a lot of events, and later in the flow we filter out the ones we need and terminate the rest: about 8k events spread out during the day, of which we only process about 3k. We are working on fine-tuning the SQS events so we only receive the ones that really need to be processed.

Hopefully you can advise on the challenge we are having. By the way, when restarting the primary node, the secondary becomes primary and we see the same heap issue there.

Thank you in advance.
Kind regards, Dave
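As an aside, for anyone wanting to watch this without clicking through the UI: per-node JVM heap is exposed by the NiFi REST API. A minimal Groovy sketch, assuming an unsecured cluster reachable at localhost:8080 (a placeholder address; a secured cluster additionally needs a bearer token or client certificate):

```groovy
// Sketch: read per-node JVM heap utilization from the NiFi REST API.
// Assumes an unsecured node at localhost:8080 (placeholder address).
import groovy.json.JsonSlurper

def url = 'http://localhost:8080/nifi-api/system-diagnostics?nodewise=true'
def diag = new JsonSlurper().parse(new URL(url))

diag.systemDiagnostics.nodeSnapshots.each { node ->
    def s = node.snapshot
    println "${node.address}: heap ${s.heapUtilization} (${s.usedHeap} of ${s.maxHeap})"
}
```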
Labels:
- Apache NiFi
03-26-2024
07:33 AM
Hi @hegde, thank you for your fast reply. That would indeed have worked. Fortunately, we found the culprit in our case: someone made manual changes to the NiFi GitHub repo used by the NiFi Registry. We reverted the changes in GitHub and now the Registry seems to work again 🙂