Member since: 06-26-2015
Posts: 515
Kudos Received: 140
Solutions: 114
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2578 | 09-20-2022 03:33 PM |
| | 6970 | 09-19-2022 04:47 PM |
| | 3680 | 09-11-2022 05:01 PM |
| | 4288 | 09-06-2022 02:23 PM |
| | 6797 | 09-06-2022 04:30 AM |
07-31-2022
06:21 PM
@KhASQ , Besides @SAMSAL 's solution, you can also use ReplaceText to eliminate the need to extract the entire content as an attribute. You'd still have to set a large enough buffer, though, to ensure your largest message can be processed. Cheers, André
07-31-2022
02:44 PM
@NJK , The availability of Atlas (and the number of nines you get) will depend on your implementation. Check this page for more information on Atlas high availability options. The more independent servers you have backing the Atlas service, the more nines you'll get. Cheers, André
07-22-2022
08:59 AM
Hi All, The problem was on the JDK version. We were using OpenJDK 11.0.2 which had a bug in the TLS handshake. Solution: Upgrade JDK (now using 11.0.15).
07-13-2022
11:20 PM
1 Kudo
@Lewis_King , Here's an idea. You can fork the "a" output of the QueryRecord processor and send it to a sequence of processors. The ReplaceText processor will simply replace the entire contents of the flowfile with the information you want to register in the log. This will produce one row per flow file with the source type ("a") and the timestamp. You can then send these to a MergeRecord to avoid saving too many small log files and then to a PutFile to persist the log. You can process the "b" output in a similar way. Cheers, André
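The log row described above can be sketched as follows. This is a minimal Python illustration of the kind of line ReplaceText would emit, not NiFi code; the CSV layout and field order are assumptions.

```python
from datetime import datetime, timezone

def log_row(source_type: str) -> str:
    """Build one log row containing the source type and the current timestamp."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    return f"{source_type},{ts}"

# One row per flow file, e.g. "a,2022-07-13 23:20:00"
print(log_row("a"))
```

MergeRecord would then batch many such single-row flow files before PutFile writes them out, which avoids a flood of tiny log files.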
07-13-2022
07:57 AM
@Drozu, Have any of the replies helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
07-13-2022
06:45 AM
1 Kudo
@MarioFRS , You can set it to the following: @([^@]*)@ Cheers, André
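The suggested pattern captures everything between a pair of "@" delimiters. A quick Python check of the same regex (the sample input below is hypothetical, since the original question's data isn't shown):

```python
import re

# @([^@]*)@ : match an "@", capture any run of non-"@" characters, match the closing "@"
pattern = re.compile(r"@([^@]*)@")

m = pattern.search("prefix @payload@ suffix")
print(m.group(1))  # payload
```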
07-13-2022
06:42 AM
@AbhishekSingh , You can use the following expression instead to format the date as you want in the UpdateRecord processor: /subscription_start_at_timestamp -> format(toDate(/subscription_start_at, "EEE MMM d HH:mm:ss z yyyy"), "yyyy-MM-dd HH:mm:ss") Cheers, André
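The conversion the expression performs can be illustrated outside NiFi. This Python sketch mirrors the two format patterns: the Java-style "EEE MMM d HH:mm:ss z yyyy" roughly corresponds to Python's "%a %b %d %H:%M:%S %Z %Y", and the target "yyyy-MM-dd HH:mm:ss" corresponds to "%Y-%m-%d %H:%M:%S". The sample timestamp value is hypothetical.

```python
from datetime import datetime

# Parse a "Wed Jul 13 06:42:00 UTC 2022"-style string, then reformat it
raw = "Wed Jul 13 06:42:00 UTC 2022"
parsed = datetime.strptime(raw, "%a %b %d %H:%M:%S %Z %Y")
print(parsed.strftime("%Y-%m-%d %H:%M:%S"))  # 2022-07-13 06:42:00
```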
07-13-2022
05:16 AM
Hello Matt, Thank you! This solved the error (now I'm facing another one, but will figure it out 🙂). For future reference, I had to configure these 3 lines in nifi.properties: nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?) nifi.security.identity.mapping.transform.dn=NONE nifi.security.identity.mapping.value.dn=$1@$2 Thanks. Vince.
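The identity mapping above rewrites a certificate DN into a "user@org"-style identity. As a rough illustration of how the pattern and the $1@$2 value interact, here is a Python sketch; `re.fullmatch` approximates NiFi matching the pattern against the whole DN, and the sample DN is hypothetical.

```python
import re

# Same pattern as nifi.security.identity.mapping.pattern.dn
pattern = re.compile(r"CN=(.*?), OU=(.*?)")

dn = "CN=vince, OU=nifi"  # hypothetical DN
m = pattern.fullmatch(dn)
if m:
    # Mirrors the mapping value $1@$2
    identity = f"{m.group(1)}@{m.group(2)}"
    print(identity)  # vince@nifi
```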
07-12-2022
07:28 PM
1 Kudo
Thank you so much for the help! This solved my problem.
07-08-2022
11:47 AM
@Luwi An "active content claim" would be any content claim where a FlowFile still exists referencing bytes of content in that claim. A NiFi content claim file can contain the content for 1 to many FlowFiles. So all it takes is one small FlowFile still queued in some connection anywhere on your NiFi canvas to prevent a content claim from being eligible to be moved to archive. This is why the total reported content queued on your canvas will never match the disk usage in your content_repository. This article is useful in understanding this process more: https://community.cloudera.com/t5/Community-Articles/Understanding-how-NiFi-s-Content-Repository-Archiving-works/ta-p/249418 Thank you, Matt
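The relationship described above can be modeled in a few lines. This is a conceptual sketch only (not NiFi code): a claim holds content for many FlowFiles, and it becomes archivable only once no queued FlowFile references it. The claim and FlowFile names are invented for illustration.

```python
# Each content claim holds bytes for one-to-many FlowFiles
claims = {
    "claim-1": ["ff-a", "ff-b"],
    "claim-2": ["ff-c"],
}

# One small FlowFile still sitting in some connection on the canvas
queued_flowfiles = {"ff-b"}

def archivable(claim_id: str) -> bool:
    """A claim can move to archive only when no queued FlowFile references it."""
    return not any(ff in queued_flowfiles for ff in claims[claim_id])

print(archivable("claim-1"))  # False: ff-b still references this claim
print(archivable("claim-2"))  # True: no queued FlowFile references it
```

This is also why queued content size and content_repository disk usage diverge: the single queued ff-b pins all of claim-1's bytes on disk.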