Member since: 02-01-2022
Posts: 281
Kudos Received: 103
Solutions: 60
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1120 | 05-15-2025 05:45 AM |
| | 4951 | 06-12-2024 06:43 AM |
| | 7922 | 04-12-2024 06:05 AM |
| | 5833 | 12-07-2023 04:50 AM |
| | 3204 | 12-05-2023 06:22 AM |
09-07-2023
07:30 PM
So I copied only the NARs we actually use, and the container could launch. I did have to remove a few NARs that were causing issues, such as nifi-ssl-context-service-nar-1.10.0.nar. Existing flows no longer have issues with properties that are obsolete in 1.22.0, since the 1.10.0 NARs are used for those components. Thanks for all the inputs.
08-28-2023
08:05 AM
@JohnnyRocks, as @steven-matison said, you should avoid chaining so many ReplaceText processors. I am not quite sure I understood your flow exactly, but something tells me that before reaching ReplaceText, something is not properly configured in your NiFi flow.

First of all, when using the classic Java date format, "MM" always produces a two-digit month, meaning months 1 through 9 are automatically padded with a leading zero. "dd" does the same for days. As I see in your post, your CSV reader is configured to read the data as MM/dd/yy, which should be fine, but something is missing here: how do you arrive at the format dd/MM/yyyy?

What I would personally try is to convert all those date values into the same format. Instead of all those ReplaceText processors, I would insert a single UpdateRecord processor, defining a RecordReader and a RecordWriter with the desired schemas (make sure your column is type int with logical type date). In that processor, I would set the Replacement Value Strategy to "Record Path Value", press + to add a new property, name it "/Launch_Date" (pay attention to the leading slash), and assign it the value format( /Launch_Date, "dd/MM/yyyy", "Europe/Bucharest") (or any other timezone you require; if you need the data in UTC, just remove the comma and the timezone).
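To see why the leading-zero worry is a non-issue, here is a minimal Python sketch of the same conversion the UpdateRecord format() call performs on the /Launch_Date field (NiFi itself uses Java date patterns; the function name below is hypothetical, for illustration only):

```python
from datetime import datetime

def reformat_launch_date(value: str) -> str:
    """Parse a MM/dd/yy date (as the CSV reader sees it) and
    re-emit it as dd/MM/yyyy, the target format in this thread."""
    parsed = datetime.strptime(value, "%m/%d/%y")
    return parsed.strftime("%d/%m/%Y")

# A missing leading zero in the input is parsed fine, and the
# output is always zero-padded:
print(reformat_launch_date("8/28/23"))   # 28/08/2023
print(reformat_launch_date("08/28/23"))  # 28/08/2023
```

The point: once the value is parsed into a real date, padding and reordering are handled by the output pattern, so no chain of ReplaceText substitutions is needed.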
08-24-2023
12:31 PM
@kothari It is not Ranger's job to inform the client applications using Ranger which users belong to which group. Each client application is responsible for determining which groups a user authenticated into that service belongs to. The policies generated by Ranger are downloaded by the client applications. Within that downloaded policy JSON will be the resource identifier(s), a list of user identities authorized (read, write, and/or delete), and a list of group identities authorized (read, write, and/or delete) against each resource identifier. So when the client checks the policies downloaded from Ranger, it looks for the user identity being authorized; and if the client is aware of the group(s) that user belongs to, it will also check authorization for those group identities.

So in your case, it is most likely that your client service/application has not been configured with the same user and group associations set up in your Ranger service.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click "Accept as Solution" below each response that helped. Thank you, Matt
08-24-2023
06:17 AM
I'm facing the same issue with the ADF Hive connector. It would be great if you could provide your configuration details.
08-22-2023
05:24 AM
@sahil0915 What you are proposing would require you to ingest into NiFi all ~100 million records from DC2, hash each record, and write all ~100 million hashes to a map cache like Redis or HBase (which you would also need to install somewhere) using the DistributedMapCache processor; then ingest all ~100 million records from DC1, hash those records, and finally compare their hashes with the hashes you added to the distributed map cache using DetectDuplicate. Any records routed to "non-duplicate" represent what is not in DC2. Then you would have to flush your distributed map cache and repeat the process, this time writing the hashes from DC3 to the distributed map cache.

I suspect this will perform poorly. You would have NiFi ingesting ~300 million records just to create hashes for a one-time comparison.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click "Accept as Solution" below each response that helped. Thank you, Matt
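For clarity, the comparison being proposed boils down to a hash-based set difference. A minimal sketch, with tiny hypothetical record lists standing in for the ~100M-record datasets and a plain Python set standing in for the DistributedMapCache:

```python
import hashlib

def record_hash(record: str) -> str:
    """Stable digest of one record, analogous to what a NiFi
    hashing step would write to the map cache."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

# Hypothetical stand-ins for the datasets in each data center.
dc2 = ["r1", "r2", "r3"]
dc1 = ["r1", "r4"]

# Step 1: load DC2 hashes into the cache (DistributedMapCache in NiFi).
cache = {record_hash(r) for r in dc2}

# Step 2: hash DC1 records; anything absent from the cache is what
# DetectDuplicate would route to "non-duplicate", i.e. present in
# DC1 but not in DC2.
not_in_dc2 = [r for r in dc1 if record_hash(r) not in cache]
print(not_in_dc2)  # ['r4']
```

At forum scale this is trivial; at ~100M records per side, every record must be ingested and hashed just to populate and probe the cache, which is why the approach above is expected to perform poorly for a one-time comparison.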
08-21-2023
08:37 AM
Let's take this in a different direction: open up a code box in your reply (choose "Preformatted") and insert lines 0 - 11 here. Remove anything sensitive, of course.
08-21-2023
06:29 AM
@learner-loading were you able to resolve your issue? If any of the above posts provided the solution, please mark it accordingly, as that will make it easier for others to find the answer in the future.
08-18-2023
08:23 AM
2 Kudos
For information, a Jira ticket has been created: https://issues.apache.org/jira/browse/NIFI-11967
08-17-2023
06:07 AM
@sree21 You should be able to download the driver and use it anywhere; the license applies only to the Hive endpoint itself. You can find the download page here: https://www.cloudera.com/downloads/connectors/hive/odbc/2-6-1.html
08-17-2023
04:49 AM
1 Kudo
Thanks. It was an issue with the connection string: it was pointing to a different database than the one my processor was supposed to insert into.