Member since: 07-29-2020
Posts: 558
Kudos Received: 307
Solutions: 167
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 112 | 11-28-2024 06:07 AM |
| | 78 | 11-25-2024 09:21 AM |
| | 213 | 11-22-2024 03:12 AM |
| | 116 | 11-20-2024 09:03 AM |
| | 314 | 10-29-2024 03:05 AM |
12-04-2024
06:51 AM
Hi @DuyChan , Have you tried using DistributedMapCacheClientService & DistributedMapCacheServer instead? I'm not sure what the difference with the MapCacheClientService is, but it should do the same job. Also be aware that, because of size limitations, NiFi is no longer published with all packages, so if you can't find services or processors that should be part of NiFi, you probably need to download the jar/nar package from the Maven repositories and save it to the NiFi lib folder:
https://mvnrepository.com/artifact/org.apache.nifi/nifi-hazelcast-services-api-nar/2.0.0
https://mvnrepository.com/artifact/org.apache.nifi/nifi-distributed-cache-client-service-api
If that helps please accept the solution. Thanks
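For example, here is a minimal sketch of fetching the Hazelcast NAR from Maven Central into the lib folder. The download URL follows the standard Maven repository layout for the artifact linked above, and the destination path assumes a default install location, so adjust both to your setup:

```python
# Sketch: download the Hazelcast services NAR from Maven Central into
# NiFi's lib folder. Version and destination path are illustrative.
import urllib.request

url = ('https://repo1.maven.org/maven2/org/apache/nifi/'
       'nifi-hazelcast-services-api-nar/2.0.0/'
       'nifi-hazelcast-services-api-nar-2.0.0.nar')
dest = '/opt/nifi/lib/nifi-hazelcast-services-api-nar-2.0.0.nar'
urllib.request.urlretrieve(url, dest)
```

Remember to restart NiFi after dropping the NAR in place so it gets loaded.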
12-03-2024
09:21 AM
1 Kudo
@SAMSAL , Thanks for your reply. In fact, I've asked the Kafka side itself not to send null values, and that sorted the issue.
12-03-2024
06:01 AM
2 Kudos
Hi @SS_Jin , Glad to hear that my post helped. It's really hard to suggest something, especially when I don't have all the details of what you are trying to do, but from what I read, I think Join/Fork Enrichment would work better in these scenarios. The MergeRecord approach can be problematic when you are reading multiple sources and multiple CSVs, where the merge behavior can be unpredictable. Also, depending on what type of enrichment you are doing and how complex it is: if you have a one-to-one mapping between records in the DB and the CSV and you are trying to override some data or add new data, then you might also consider the LookupRecord processor to simplify your data flow. That way you don't have to branch to read and then merge the different sources, which might end up saving you some overhead. https://community.cloudera.com/t5/Community-Articles/Data-flow-enrichment-with-NiFi-part-1-LookupRecord-processor/ta-p/246940
12-03-2024
05:20 AM
1 Kudo
Hi Samsal, That's very unfortunate. I've attempted the Mac version. All the examples I've found by others, whether written or on YouTube, only demonstrate setting it up with localhost. This slightly older example (https://www.youtube.com/watch?v=LanpbWR7Gv8) of using certificates with multiple users is great (it glosses over a few minor modifications), and it would be better if it also demonstrated setup with a host name other than localhost, since I can't think of a use case with multiple users calling localhost; it doesn't make use of Docker either. I hope someone else chimes in here and offers us some guidance for what is, in my opinion, a very typical installation. Thanks for your response!
12-03-2024
04:36 AM
1 Kudo
Hi @Mikhai , It's hard to say what is going on without looking at the data itself or seeing the ExcelReader configuration. I know providing the data is not easy, but if you can replicate the issue using dummy data then please share. Also, please provide more details on how you configured the ExcelReader; for example, are you using a custom schema or inferring the schema? I would try the following:
1- Try to find the table boundary in Excel and delete the empty rows. If you can't, then for the sake of testing copy the table with just the rows you need into a new Excel file and see if that works.
2- If the ExcelReader works with the 545 rows, then I would provide a custom schema (if one is not provided already) and set the fields that should always have a value to not allow null. Doing so may help the ExcelReader skip importing empty rows.
I tried to use the ExcelReader before but ran into issues when the Excel file had formula columns, because of a bug in the reader itself. I'm not sure if those issues have been addressed, but as a workaround I used the Python extension to develop a custom processor that takes the Excel input and converts it into JSON using the pandas library; a sketch of that approach follows. This might be an option to consider if you are still having problems with the ExcelReader service, but you have to use NiFi 2.0 in order to use Python extensions. If that helps please accept the solution, Thanks
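A minimal sketch of such a processor using the NiFi 2.0 Python FlowFileTransform API, assuming pandas and openpyxl are available; the class name and the dropna cleanup step are my own illustration, not a published processor:

```python
# Sketch: NiFi 2.0 Python extension processor that converts an Excel
# flowfile to JSON records with pandas, dropping fully empty rows.
import io
import pandas as pd
from nifiapi.flowfiletransform import FlowFileTransform, FlowFileTransformResult


class ExcelToJson(FlowFileTransform):
    class Java:
        implements = ['org.apache.nifi.python.processor.FlowFileTransform']

    class ProcessorDetails:
        version = '1.0.0'
        description = 'Converts an Excel flowfile to JSON records using pandas.'
        dependencies = ['pandas', 'openpyxl']

    def __init__(self, **kwargs):
        super().__init__()

    def transform(self, context, flowfile):
        # Parse the incoming Excel content (openpyxl handles .xlsx).
        df = pd.read_excel(io.BytesIO(flowfile.getContentsAsBytes()))
        # Drop rows that are entirely empty, which sidesteps the
        # "empty rows after the table" problem described above.
        df = df.dropna(how='all')
        return FlowFileTransformResult(
            relationship='success',
            contents=df.to_json(orient='records'),
        )
```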
11-30-2024
11:42 AM
1 Kudo
Hi @Vikas-Nifi , I think you can avoid a lot of overhead, such as writing the data to the DB just to do the transformation and assign the fixed widths (unless you need to store the data in the DB). You can use processors like QueryRecord and UpdateRecord to do the needed transformations in bulk instead of one record at a time and one field at a time. In QueryRecord you can use SQL-like functions, based on the Apache Calcite SQL syntax, to transform or derive new columns just as if you were writing a MySQL query. In UpdateRecord you can use the NiFi RecordPath to traverse fields and apply functions in bulk. There is also a FreeFormTextRecordSetWriter service that you can use to create a custom output format.

For example, you can use a ConvertRecord processor with a CSVReader (the default configuration is fine) and a FreeFormTextRecordSetWriter to produce the desired output; a GenerateFlowFile processor can be used to create the input CSV flowfile for testing. In the writer's Text property you can use the column/field names as listed in the input and provided to the reader, and you can also use the NiFi Expression Language to apply the proper formatting and transformations to the written data, as follows:

```
${DATE:replace('-',''):append(${CARD_TYPE}):padRight(28,' ')}${CUST_NAME:padRight(20,' ')}${PAYMENT_AMOUNT:padRight(10,' ')}${PAYMENT_TYPE:padRight(10,' ')}
```

This will produce the following output:

```
20241129Visa                Test1               0.01      Credit Card
20241129Master              Test2               10.0      Credit Card
20241129American Express    Test3               500.0     Credit Card
```

I know this is not 100% what you need, but it should give you an idea of what to do to get the desired output. Hope that helps, and if it does, please accept the solution. Let me know if you have any other questions. Thanks
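As a quick way to check the column widths offline before configuring the writer, the same layout can be reproduced in plain Python (purely a verification aid, not part of the NiFi flow):

```python
# Mirror of the Expression Language above: replace, append, then pad each
# column to its fixed width (28/20/10/10), like padRight(n, ' ').
rows = [
    {'DATE': '2024-11-29', 'CARD_TYPE': 'Visa', 'CUST_NAME': 'Test1',
     'PAYMENT_AMOUNT': '0.01', 'PAYMENT_TYPE': 'Credit Card'},
]

for r in rows:
    line = ((r['DATE'].replace('-', '') + r['CARD_TYPE']).ljust(28)
            + r['CUST_NAME'].ljust(20)
            + r['PAYMENT_AMOUNT'].ljust(10)
            + r['PAYMENT_TYPE'].ljust(10))
    print(line)
```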
11-28-2024
08:27 PM
1 Kudo
I tried to delete the data you mentioned, but I don't know how to edit the topic. Thank you very much for your support.
11-27-2024
11:12 AM
Sure! If you come up with a solution different from what I suggested, please do post about it so it can help others who might run into a similar situation. Good luck.
11-25-2024
09:21 AM
Hi, I don't see a toNumber function in the RecordPath syntax, so I'm not sure how you came up with this. It would be helpful next time to provide the following information:
1- the input format.
2- a screenshot of the configuration of the processor causing the error.
As for your problem, the easiest and more efficient way I can think of (more efficient than splitting records) is to use the QueryRecord processor. Let's assume you have the following CSV input:

```
id,date_time
1234,2024-11-24 19:43:17
5678,2024-11-24 01:10:10
```

Pass the input to a QueryRecord processor and add the query below as a dynamic property; the property name is exposed as a new relationship that you can use to get the desired output:

```
select id, TIMESTAMPADD(HOUR, -3, date_time) as date_time from flowfile
```

The trick for this to work is how you configure the CSVReader and CSVRecordSetWriter so they know how to parse and write the datetime field: in both services, set the timestamp format to match the data (here, yyyy-MM-dd HH:mm:ss). The output through the result relationship:

```
id,date_time
1234,2024-11-24 16:43:17
5678,2024-11-23 22:10:10
```

Hope that helps. If it does, please accept the solution. Thanks
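If you want to sanity-check the shift outside NiFi, the same -3 hour adjustment is easy to reproduce in plain Python (purely illustrative, not part of the flow):

```python
# Reproduce TIMESTAMPADD(HOUR, -3, date_time) on the sample rows.
from datetime import datetime, timedelta

fmt = '%Y-%m-%d %H:%M:%S'
for ts in ['2024-11-24 19:43:17', '2024-11-24 01:10:10']:
    print((datetime.strptime(ts, fmt) - timedelta(hours=3)).strftime(fmt))
# 2024-11-24 16:43:17
# 2024-11-23 22:10:10
```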
11-22-2024
05:44 AM
1 Kudo
@SAMSAL Jeez... I should not have prepared that flow at the end of a 12-hour workday... Of course, it works now. Sorry for the trouble and thanks for the quick support!