Member since: 07-25-2022
Posts: 18
Kudos Received: 7
Solutions: 3
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 294 | 08-14-2024 01:31 PM |
 | 1375 | 08-14-2024 12:52 PM |
 | 540 | 04-04-2024 08:45 AM |
09-09-2024
03:47 PM
Thanks @SAMSAL for the clarification and the alternatives for looping in NiFi. We will consider using the DuplicateFlowFile processor. Do you think the NiFi documentation should be updated to explicitly mention that it is not only for load tests but can also be used in production flows when there is a need to clone flow files?
09-06-2024
02:01 PM
1 Kudo
I would appreciate any help with my question above about DuplicateFlowFile usage. @SAMSAL @MattWho
09-04-2024
02:51 PM
@AlokKumar I am not able to understand the full context here. It would help us get to the right solution if you can explain why the inserts into the same table must be executed in a certain order. That said, NiFi connection configuration settings have prioritizers (a screenshot of the connection settings was attached below). If you are able to set an attribute named "priority" with the desired number, the PriorityAttributePrioritizer can be used. Please note that these prioritizers operate at the node level, so you may have to test this if you have a multi-node NiFi environment. There is also an EnforceOrder processor which has more options to enforce execution order. You may check this processor and find out if it works for your case: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/nifi2.0-m4/org.apache.nifi.processors.standard.EnforceOrder/index.html
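For illustration, a minimal sketch of both options; the attribute names and values below are only examples, not taken from your actual flow:

Option 1 - connection prioritizer:
UpdateAttribute (before the connection): priority = 1 (a smaller number is processed first)
Connection > Settings > Selected Prioritizers: PriorityAttributePrioritizer

Option 2 - EnforceOrder processor:
Group Identifier = ${table.name} (ordering is enforced per group)
Order Attribute = sequence.number (numeric attribute you set upstream, e.g. with UpdateAttribute)
Initial Order = 1
Wait Timeout = 10 min (how long to wait for a missing sequence number)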
09-04-2024
01:35 PM
Hi, I have a question about our NiFi flow design, specifically about the usage of the DuplicateFlowFile processor. The latest documentation mentions that it is intended for load testing. Does that mean it cannot be used in regular production flows? Please confirm. https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/nifi-2.0.0-M4/org.apache.nifi.processors.standard.DuplicateFlowFile/index.html "Intended for load testing, this processor will create the configured number of copies of each incoming FlowFile." Please review the current and improved designs below and advise on any implications of the new approach, where the DuplicateFlowFile processor is used to clone flow files. Current Design: (flow screenshot attached) New Design, with the DuplicateFlowFile processor: (flow screenshot attached)
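For context, the only processor-specific setting involved in the new design is the copy count; a minimal sketch, where the value 2 is just an example and not our actual setting:

DuplicateFlowFile
Number of Copies = 2 (emits the original flow file plus two copies)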
Labels:
- Apache NiFi
08-16-2024
08:43 AM
@Adyant001 Oracle has an auto-increment IDENTITY feature, as shown below. The table definition takes care of assigning the incremental primary key value, so there is no need to send this value in the JSON payload. I think all other major RDBMS have a similar feature to assign auto-increment values.

CREATE TABLE roletab (
  id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1 INCREMENT BY 1 NOCYCLE),
  role_id NUMBER,
  role_name VARCHAR(100),
  PRIMARY KEY (id)
);

INSERT INTO roletab (role_id, role_name) VALUES (10, 'Admin');
INSERT INTO roletab (role_id, role_name) VALUES (20, 'Developer');
COMMIT;

SELECT * FROM roletab;

ID|ROLE_ID|ROLE_NAME|
--+-------+---------+
 1|     10|Admin    |
 2|     20|Developer|
08-14-2024
01:31 PM
1 Kudo
@Former Member The LookupRecord processor can be used in this case. Please check if this method works for you.

Sample CSV input:
id,brand,phone
10,Samsung,1
20,Apple,2

Expected output (codes 1 and 2 are replaced with names):
id,brand,phone
10,Samsung,Note
20,Apple,iPhone

I tried a NiFi flow as below (screenshots of the flow and the LookupRecord configuration attached). The CSVReader and CSVRecordSetWriter use default settings with no changes, and a SimpleKeyValueLookupService holds the code-to-name mapping. There are several lookup services available in NiFi; you may use the right service as per your requirement.
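For reference, a minimal sketch of a LookupRecord configuration along the lines described above; the /phone record paths and the dynamic property named "key" are assumptions based on the sample data, so adjust them to your schema:

LookupRecord
Record Reader = CSVReader (default settings)
Record Writer = CSVRecordSetWriter (default settings)
Lookup Service = SimpleKeyValueLookupService
Result RecordPath = /phone
key (dynamic property) = /phone

SimpleKeyValueLookupService (dynamic properties)
1 = Note
2 = iPhone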
08-14-2024
12:52 PM
1 Kudo
I would do it this way. Please see if this works for you.

1. Use a JoltTransformJSON processor to alter the input JSON.

Spec:
[
  {
    "operation": "shift",
    "spec": {
      "id": ["role.role_id", "user.id"],
      "name": "user.name",
      "age": "user.age",
      "role": {
        "role": "role.role_name"
      }
    }
  }
]

2. Pass the transformed JSON to the first PutDatabaseRecord processor to insert into the user table. Set "Data Record Path" = user.

3. Add another PutDatabaseRecord processor to insert into the role table. Set "Data Record Path" = role.
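As an illustration, assuming the source JSON has the nested shape below (the field values are made up), the shift spec above would produce roughly the following output, which the two PutDatabaseRecord processors can then split via "Data Record Path":

Input:
{
  "id": 1,
  "name": "Alice",
  "age": 30,
  "role": { "role": "Admin" }
}

Transformed output:
{
  "role": { "role_id": 1, "role_name": "Admin" },
  "user": { "id": 1, "name": "Alice", "age": 30 }
}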
08-13-2024
12:17 PM
@Adyant001 Can you share a sample JSON structure? If the source JSON contains both parent and child information, you can have two PutDatabaseRecord processors connected in series, each with the right "Data Record Path" set. If the child record needs information from the parent level, a JoltTransformJSON processor can be used to create the transformed JSON before passing it to PutDatabaseRecord.
08-13-2024
11:58 AM
Hi, currently we are using the ConsumeAMQP processor to fetch messages from RabbitMQ queues. There are some use cases in our system where RabbitMQ streams seem to be a good fit. I have tested ConsumeAMQP against a RabbitMQ stream queue, but it doesn't work as expected. I believe the new functionality related to offset management is not available in the ConsumeAMQP processor. Any insights on this would be helpful. Am I missing something? If the functionality is not part of the current processor and we would like it to be enhanced, how should I request it? Thanks in advance.
Labels:
- Apache NiFi
07-25-2024
02:11 PM
1 Kudo
Hi NiFi Experts, we are looking for best practices / suggestions for our use case below.

a) There are around 100 tables in the source database.
b) This source data needs to be synced into various target databases (after optional transformation).
c) Each source table may need to be synced to multiple target database tables, so one extract of a source table might be used to load multiple target tables that have similar structures.
d) As of today, we have implemented flows using the core processors ConsumeAMQP - ExecuteSQLRecord - PutDatabaseRecord.
e) The challenge we face today is the growing number of processors in proportion to the number of source/target tables.

We are looking for a solution to minimize the number of processors by sharing the ExecuteSQLRecord - PutDatabaseRecord processors across multiple table syncs. As a proof of concept, we tried to assign the database name, query, target database, target table name, key field, etc. dynamically by using a LookupRecord processor. Internally, we tested with the SimpleDatabaseLookupService and the PropertiesFileLookupService to help assign the required attributes dynamically. Please advise if this is a good approach or whether there is another best practice to handle data sync among tables dynamically. The core requirement is to have generic flows for multiple tables rather than a dedicated flow for each table. Please let me know if more details are required. Thank you in advance!
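As a rough sketch of the proof of concept, assuming a PropertiesFileLookupService keyed by source table name, with a delimited value that a downstream processor splits into the attributes the shared flow needs (the file name, keys, values, and delimiter below are purely illustrative, not our actual configuration):

# tables.properties (illustrative)
# source_table = target_db|target_table|key_field
customer = warehouse_db|dim_customer|customer_id
orders = warehouse_db|fact_orders|order_id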
Labels:
- Apache NiFi