How to do row count using Nifi in source table and target table while ingestion

New Contributor

I am ingesting data from Oracle to Hive using Sqoop. I want to know whether I can use NiFi to check that the number of rows in the Oracle source table matches the number of rows in the target Hive table after ingestion.


6 REPLIES

Contributor

Sure, you can do that with the MergeContent processor. If you are merging only a source flow and a target flow, set the processor's Minimum Number of Entries and Maximum Number of Entries properties to 2, and also specify a Correlation Attribute Name so the two flow files are merged together. A property sketch follows below.
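Not tested against a live flow, but based on the standard MergeContent properties the configuration would look roughly like this (the validation_table correlation attribute follows the accepted solution further down; adjust names to your own flow):

```
MergeContent (sketch):
  Merge Strategy              = Bin-Packing Algorithm
  Correlation Attribute Name  = validation_table
  Minimum Number of Entries   = 2
  Maximum Number of Entries   = 2
  Attribute Strategy          = Keep All Unique Attributes
```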

New Contributor

Can you help me with an example of how to do it?

Contributor (accepted solution)

Here I used a validation_table attribute to carry the table name in the flow file.

[Image: 39580-dia2.png]

Create your own logic to count the rows in Oracle and in Hive, then merge the two flows using the MergeContent processor.

I have created one process group to count the Oracle table and another to count the Hive table; they add an oracle_cnt attribute and a hive_cnt attribute, respectively, with the result.

The results are merged into a single flow file by correlating on the Correlation Attribute Name. Also set the Attribute Strategy to "Keep All Unique Attributes" so both count attributes survive the merge; a sketch of the comparison step is shown after the diagram.

[Image: 39581-dia1.png]
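To make the counting and comparison concrete: each process group could, for example, run ExecuteSQL with a query such as SELECT COUNT(*) FROM ${validation_table} and copy the result into the oracle_cnt or hive_cnt attribute. The snippet below is only a sketch of the comparison step, written as a Jython body for an ExecuteScript processor placed after MergeContent; the counts_match attribute name is made up for this illustration.

```python
# ExecuteScript (script engine: python / Jython), placed after MergeContent.
# Compares the row-count attributes added by the two counting process groups
# and routes the merged flow file to success or failure.
flowFile = session.get()
if flowFile is not None:
    oracle_cnt = flowFile.getAttribute('oracle_cnt')  # written by the Oracle counting group
    hive_cnt = flowFile.getAttribute('hive_cnt')      # written by the Hive counting group

    counts_match = (oracle_cnt is not None and hive_cnt is not None
                    and str(oracle_cnt).strip() == str(hive_cnt).strip())

    # Record the outcome as an attribute for downstream reporting or alerting.
    flowFile = session.putAttribute(flowFile, 'counts_match', str(counts_match))

    session.transfer(flowFile, REL_SUCCESS if counts_match else REL_FAILURE)
```

Matched flow files can simply be logged or dropped, while mismatches can be routed to an alerting path such as PutEmail.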

Explorer

Can you please post the template? I am trying to solve the same problem, and it would be a great help to me.

Super Collaborator

@Aneena Paul, how much data is being moved as part of the Sqoop job? If the volume is not too high, why not simply use NiFi to move the data from Oracle to Hive? NiFi can easily handle anything in the GB range for daily/hourly jobs. A simple flow would be GenerateTableFetch -> RPG (Remote Process Group) -> ExecuteSQL -> PutHDFS.
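Roughly, and subject to your NiFi version and environment, the key settings for that flow might look like the sketch below (placeholder values in angle brackets):

```
GenerateTableFetch (sketch):
  Database Connection Pooling Service = <Oracle DBCP pool>
  Database Type                       = Oracle
  Table Name                          = <source table>
  Maximum-value Columns               = <incremental key column, e.g. an ID or timestamp>
  Partition Size                      = 10000

-> Remote Process Group   (distributes the generated queries across the cluster)
-> ExecuteSQL             (runs each generated query against Oracle, emitting Avro)
-> PutHDFS                (writes the results to HDFS)
```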

Super Collaborator

This will also give you data provenance in NiFi, which confirms how much data (in bytes) was extracted and sent to HDFS, so there is no need for this additional check.