Member since
01-27-2023
229
Posts
73
Kudos Received
45
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 704 | 02-23-2024 01:14 AM
| 909 | 01-26-2024 01:31 AM
| 618 | 11-22-2023 12:28 AM
| 1421 | 11-22-2023 12:10 AM
| 1612 | 11-06-2023 12:44 AM
09-06-2023
12:05 PM
@MukaAddA Sorry, writing such a script is not a strong area for me. I just happened to notice that you were doing a session.create instead of a session.get. You may get better help by raising a new question on how to create a script for the ExecuteScript processor to accomplish your use case, providing details on that use case. I am sure there are others in the community who are good at writing such scripts. Matt
09-04-2023
01:16 PM
@code_mnkey Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
09-04-2023
07:45 AM
Hi everyone, I experienced the same error. After also inspecting the logs of NiFi Registry, I found this error:

2023-09-04 16:18:10,346 ERROR [NiFi Registry Web Server-17] o.a.n.r.web.mapper.ThrowableMapper An unexpected error has occurred: org.apache.nifi.registry.flow.FlowPersistenceException: Git directory /data/nifi01/nifi-registry-1.18.0/../nifiregistry_git is not clean or has uncommitted changes, resolve those changes first to save flow contents. Returning Internal Server Error response.

I changed to the path noted in the error message, switched to the user that runs NiFi Registry, and checked the repository status with git status. Several files were modified, so the git directory clearly was not "clean". I committed and pushed everything (I had to set the git user name to make a successful commit). I don't know why this all happened, but for now it's fixed.
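The recovery steps described above can be sketched as follows. This is a hypothetical reproduction in a throwaway repository; in the real incident the repo was /data/nifi01/nifi-registry-1.18.0/../nifiregistry_git and the commands were run as the user that owns the NiFi Registry process (that user also pushed to a configured remote, which is omitted here).

```shell
REPO=$(mktemp -d)                          # stand-in for the registry's git dir
git init -q "$REPO"
cd "$REPO"
git config user.name  "nifi-registry"      # an identity is required to commit
git config user.email "nifi-registry@example.invalid"
echo '{"flow":1}' > flow.snapshot
git add -A && git commit -qm "initial flow"
echo '{"flow":2}' > flow.snapshot          # dirty working tree, as in the error
git status --short                         # shows the uncommitted change
git add -A && git commit -qm "commit stray changes so flow saves succeed"
git status --short                         # prints nothing: the tree is clean
```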
08-29-2023
01:59 AM
Problem solved by setting a Decimal() type for the "stake" field used by the JoltTransformJSON processor: ${stake:toDecimal()}
08-28-2023
08:05 AM
@JohnnyRocks, as @steven-matison said, you should avoid chaining so many ReplaceText processors. I am not quite sure I understood your flow exactly, but something tells me that before reaching ReplaceText, something is not properly configured in your NiFi flow.

First of all, when using the classic Java date format, MM will always produce a two-digit month, meaning that months 1 to 9 automatically get a leading zero. "dd" does the same for days. As I see in your post, your CSV reader is configured to read the data as MM/dd/yy, which should be fine, but somehow something is missing here: how do you end up with the format dd/MM/yyyy?

What I would personally try is to convert all those date values into the same format. So instead of all those ReplaceText processors, I would insert an UpdateRecord processor, defining a Record Reader and a Record Writer with the desired schemas (make sure that your column is type int with logical type date). Next, in that processor, I would change the Replacement Value Strategy to "Record Path Value", press + to add a new property, name it "/Launch_Date" (pay attention to the leading slash), and assign it the value format( /Launch_Date, "dd/MM/yyyy", "Europe/Bucharest" ) (or any other timezone you require; if you need your data in UTC, just remove the comma and the timezone).
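Sketched as a property listing, the UpdateRecord configuration described above might look like this (the controller-service names are illustrative, not from the original flow):

```
UpdateRecord
  Record Reader              = CSVReader           # Launch_Date typed int, logical type date
  Record Writer              = CSVRecordSetWriter
  Replacement Value Strategy = Record Path Value
  /Launch_Date               = format( /Launch_Date, "dd/MM/yyyy", "Europe/Bucharest" )
```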
08-28-2023
12:51 AM
@dulanga, as far as I can tell from your previous post, you have around 3.8 GB of RAM available on your NiFi node, but you are assigning much more than that to your JVM. So you have:

               total   used    free    shared  buff/cache  available
Mem:           3.8Gi   1.5Gi   2.1Gi   145Mi   269Mi       2.1Gi
Swap:          511Mi   511Mi   0B

But you are assigning much more to your JVM:

# JVM memory settings
java.arg.2=-Xms4096m
java.arg.3=-Xmx8192m

Try correcting your configuration and assign appropriate values for your JVM in the bootstrap.conf file. Here are some best practices: https://community.cloudera.com/t5/Community-Articles/HDF-CFM-NIFI-Best-practices-for-setting-up-a-high/ta-p/244999
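For a node with ~3.8 GiB of physical memory, a safer bootstrap.conf might look like the sketch below. The exact values depend on what else runs on the host and are only illustrative; setting -Xms equal to -Xmx avoids heap resizing at runtime.

```
# bootstrap.conf — JVM memory settings (illustrative values, not a recommendation
# for every workload; leave memory for the OS and NiFi's off-heap usage)
java.arg.2=-Xms1024m
java.arg.3=-Xmx1024m
```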
08-22-2023
08:51 AM
1 Kudo
Hi @Anderosn, if I understood you correctly, you are trying to duplicate the FlowFile so that it can be sent to different processors, is that right? If so, you can easily drag the same relationship multiple times from a given processor. For example, if the upstream processor that produces the result FlowFile sends it to the success relationship, you can drag two success relationships to different downstream processors and process the same content differently, in parallel. If that helps, please accept the solution. Thanks
08-22-2023
05:24 AM
@sahil0915 What you are proposing would require you to ingest into NiFi all ~100 million records from DC2, hash each record, and write all ~100 million hashes to a map cache like Redis or HBase (which you would also need to install somewhere) using the DistributedMapCache processor. You would then ingest all ~100 million records from DC1, hash those records, and finally compare their hashes against the ones you added to the distributed map cache using DetectDuplicate. Any records routed to non-duplicate would represent what is not in DC2. You would then have to flush your distributed map cache and repeat the process, this time writing the hashes from DC3 to the cache.

I suspect this will perform poorly: NiFi would be ingesting ~300 million records just to create hashes for a one-time comparison.

If you found that the provided solution(s) assisted you with your query, please take a moment to login and click "Accept as Solution" below each response that helped. Thank you, Matt
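The hash-and-compare idea above can be sketched at small scale outside NiFi: hash every record from each export, then diff the sorted hash sets. File names and record contents here are made up for illustration; this is not the NiFi flow itself, just the same set logic.

```shell
cd "$(mktemp -d)"
printf 'a,1\nb,2\nc,3\n' > dc1.csv          # DC1 export: has one extra record, c,3
printf 'a,1\nb,2\n'      > dc2.csv          # DC2 export
# Hash each record (line) and sort, since comm requires sorted input.
hash_lines() {
  while IFS= read -r rec; do
    printf '%s\n' "$rec" | md5sum | cut -d' ' -f1
  done < "$1" | sort
}
hash_lines dc1.csv > dc1.hashes
hash_lines dc2.csv > dc2.hashes
# Hashes present in DC1 but absent from DC2 — the "non-duplicate" route:
comm -23 dc1.hashes dc2.hashes | wc -l      # prints 1 (the hash of "c,3")
```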
08-21-2023
07:04 AM
Thank you, @cotopaul !! I'll try this.
08-20-2023
01:06 AM
I appreciate the comprehensive response. Thanks.