<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Need help with Database migration with NiFi 1.9.2 - Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Need-help-with-Database-migration-with-NiFi-1-9-2/m-p/284907#M211509</link>
    <description>&lt;P&gt;I'm trying to migrate data from an Oracle database to a Postgres database. I'm using an example I found online, but it's very vague about how the processors need to be set up. The red text under the graphic gives the instructions from the diagram.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Migration process group.PNG" style="width: 736px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/25628i9E319BFD025E032A/image-size/large?v=v2&amp;amp;px=999" role="button" title="Migration process group.PNG" alt="Migration process group.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H3&gt;&lt;FONT color="#FF0000"&gt;&lt;SPAN&gt;&lt;STRONG&gt;Get Table List&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT color="#FF0000"&gt;ListDatabaseTables: There may be a need to migrate all tables or only some of them, so the initial task is to list the tables to migrate. Use the ListDatabaseTables processor to fetch the list of tables to be migrated. Edit the processor's properties to connect to the source database, and use the Catalog, Schema Pattern, Table Name Pattern, and Table Types properties to filter the list of tables. The processor outputs a single flowfile for each table in the list and adds the following attributes to each flowfile: db.table.name, db.table.catalog, db.table.schema, db.table.type. In this way the processor attaches the details of the source database and table to each flowfile.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#FF0000"&gt;&lt;SPAN&gt;UpdateAttribute: The ListDatabaseTables processor adds the table attributes for the source database system. To add the destination database properties, use the UpdateAttribute processor. Add the following attributes: ‘destination_table_catalog’ and ‘destination_table_schema’. The table name will be the same as in the source database (db.table.name).&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
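&lt;P&gt;After these two processors, each flowfile should carry attributes roughly like the following (the values are illustrative only, assuming an Oracle source and a Postgres destination):&lt;/P&gt;

```
db.table.catalog            = MYDB      # set by ListDatabaseTables
db.table.schema             = HR
db.table.name               = EMPLOYEES
db.table.type               = TABLE
destination_table_catalog   = mydb      # set by UpdateAttribute
destination_table_schema    = public
```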
&lt;H3&gt;&lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;Create Table&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT color="#FF0000"&gt;&lt;SPAN&gt;ExecuteSQL: Once each flowfile carries the information about the source database, the destination database, and the table to migrate, the next step is to create the table in the destination database. Use the ExecuteSQL processor to fetch the table schema from the source database. The SQL statement to fetch the table schema varies with the type of source database. ExecuteSQL outputs an Avro flowfile containing the table schema.&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#FF0000"&gt;&lt;SPAN&gt;ConvertAvroToJSON: The Avro-format flowfile is converted to JSON format using the ConvertAvroToJSON processor.&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#FF0000"&gt;&lt;SPAN&gt;ExecuteScript: Use the ExecuteScript processor to generate the Data Definition Language (DDL) statement that creates the table in the destination database. In the ExecuteScript processor, a scripting language such as Python, Groovy, or Ruby can be used to read the JSON-format schema of the table and build the DDL statement.&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#FF0000"&gt;&lt;SPAN&gt;PutSQL: To execute the DDL statement that creates the table in the destination database, use the PutSQL processor.&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
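&lt;P&gt;As a sketch of the ExecuteScript step above, the core of such a script (here plain Python, with the NiFi session and flowfile plumbing omitted) could translate the JSON column list into a CREATE TABLE statement. The type mapping and the column-record field names (COLUMN_NAME, DATA_TYPE, DATA_LENGTH, as a query against Oracle's ALL_TAB_COLUMNS would return) are assumptions; adapt them to whatever schema query you actually use.&lt;/P&gt;

```python
import json

# Assumed Oracle -to- PostgreSQL type mapping; extend it to cover
# every type that appears in your source tables.
TYPE_MAP = {
    "NUMBER": "numeric",
    "VARCHAR2": "varchar",
    "NVARCHAR2": "varchar",
    "DATE": "timestamp",
    "CLOB": "text",
    "BLOB": "bytea",
}

def build_ddl(schema, table, columns_json):
    """Build a CREATE TABLE statement from a JSON list of column
    records such as ExecuteSQL plus ConvertAvroToJSON would produce
    when querying ALL_TAB_COLUMNS (field names are assumptions)."""
    cols = []
    for col in json.loads(columns_json):
        pg_type = TYPE_MAP.get(col["DATA_TYPE"], "text")  # fall back to text
        if pg_type == "varchar" and col.get("DATA_LENGTH"):
            pg_type = "varchar(%d)" % col["DATA_LENGTH"]
        cols.append("%s %s" % (col["COLUMN_NAME"].lower(), pg_type))
    return "CREATE TABLE %s.%s (%s)" % (schema, table, ", ".join(cols))

columns = json.dumps([
    {"COLUMN_NAME": "ID", "DATA_TYPE": "NUMBER"},
    {"COLUMN_NAME": "NAME", "DATA_TYPE": "VARCHAR2", "DATA_LENGTH": 50},
])
print(build_ddl("public", "employees", columns))
# -> CREATE TABLE public.employees (id numeric, name varchar(50))
```

&lt;P&gt;Inside ExecuteScript this function would be called from a session.write callback and the resulting DDL written to the flowfile content for PutSQL to execute.&lt;/P&gt;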
&lt;H3&gt;&lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;Extract Data&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT color="#FF0000"&gt;&lt;SPAN&gt;ExecuteSQL: Use the ExecuteSQL processor to fetch the table data from the source database. Set the SQL select statement to ‘select * from ${db.table.catalog}.${db.table.schema}.${db.table.name}’. This processor outputs an Avro-format flowfile with all the data of the source table. Alternatively, the GenerateTableFetch and QueryDatabaseTable processors can be used to extract data, which will be discussed in the next blog post.&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;Ingest Data&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT color="#FF0000"&gt;&lt;SPAN&gt;PutDatabaseRecord: The PutDatabaseRecord processor reads the Avro data, creates a batch upload, and uploads the data to the table created in the destination database. Data ingestion can also be performed with the ConvertAvroToJSON and PutSQL processors; however, PutDatabaseRecord is a better and faster way to upload the data. The PutSQL route can be useful if the extracted data requires some transformation.&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
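&lt;P&gt;For the PutDatabaseRecord step, a typical property setup looks roughly like the following (property names as in NiFi 1.9.x; the controller-service names are whichever ones you created):&lt;/P&gt;

```
Record Reader                       = AvroReader (controller service)
Statement Type                      = INSERT
Database Connection Pooling Service = the Postgres DBCPConnectionPool you created
Schema Name                         = ${destination_table_schema}
Table Name                          = ${db.table.name}
Translate Field Names               = true
```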
&lt;P&gt;First of all, both of my DB connections are working; I've verified them. I have my ListDatabaseTables set up to get 4 tables, and I can view the table names in the view state of the processor. The UpdateAttribute processor has had 2 attributes added per the instructions; I don't really know what else to do with it. 4 flowfiles are created, though, and flow through successfully. The next processor, ExecuteSQL, uses the following SQL select query: &lt;STRONG&gt;SELECT DBMS_METADATA.GETDDL('TABLE',u.table_name) FROM USER_ALL_TABLES u WHERE u.nested='NO' AND (u.iot_type is null or u.iot_type='IOT')&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;I don't see where I'm getting any output, and when I run the SQL in SQL Developer, I get nothing. Again, though, 4 flowfiles successfully pass through. For the ConvertAvroToJSON processor, I just used the defaults; again, 4 flowfiles successfully pass through. I don't know what I'm doing; I'm very new to this. I'm looking at the data provenance and see things happening, but I really don't know what is happening, and I can't find out how to view the contents of the flowfiles to see what is being passed through. It has the appearance of working, but how do I prove it, and what am I doing wrong? I haven't gotten past the ConvertAvroToJSON portion of the example because I've stopped there. However, the ExecuteScript processor has an error in it because I have no script; again, still learning!&lt;/P&gt;</description>
    <pubDate>Fri, 06 Dec 2019 06:35:35 GMT</pubDate>
    <dc:creator>wann_douglas</dc:creator>
    <dc:date>2019-12-06T06:35:35Z</dc:date>
  </channel>
</rss>

