Hi, I want to load data from a Hive table into a MySQL table using HDF (NiFi). I have created the following dataflow.
After executing the dataflow, it shows that 151 bytes of data were written to the destination table. However, there is no data in the destination table.
Please help me if there is a configuration issue.
The ConvertJSONToSQL processor only converts a JSON object into a SQL statement that can be executed; it does not send that statement anywhere. You need another processor after it to actually submit the command to a database for execution. The PutSQL processor (as suggested by @Jobin George) executes the SQL UPDATE or INSERT statement carried in the contents of the incoming FlowFile.
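As a sketch of that hand-off (table and column names below are hypothetical, not from the original flow), ConvertJSONToSQL typically emits a parameterized statement and carries the values in FlowFile attributes, which PutSQL then binds and executes:

```sql
-- Hypothetical example: an incoming FlowFile containing
--   {"id": 1, "name": "foo"}
-- with Statement Type = INSERT and Table Name = my_table
-- is converted into a parameterized statement such as:
INSERT INTO my_table (id, name) VALUES (?, ?)
-- The actual values travel as FlowFile attributes
-- (sql.args.1.value = 1, sql.args.2.value = foo, plus type attributes),
-- and PutSQL binds them when it executes the statement.
```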
As suggested by @Jobin George, I have added the PutSQL processor, but it inserted only one record. However, 100 records are coming from the ExecuteSQL processor (which contains the select query: select * from table_name), and I want to insert all 100 records. Please help me if any configuration is required for this.
Hi @Prasanta Sahoo,
Can you try the "ConvertAvroToJSON" processor with the configuration below:
For "JSON container options", try "array" instead of "none".
That property determines whether the records are written out as a single JSON array or as individual objects, which affects how many records the downstream processors see.
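To illustrate the difference (field names here are hypothetical): with "none", a 3-record Avro file is written as three concatenated JSON objects, and ConvertJSONToSQL ends up generating SQL for only the first one; with "array", the same records come out as one JSON array, which ConvertJSONToSQL can fan out into one INSERT per element:

```json
[
  {"id": 1, "name": "a"},
  {"id": 2, "name": "b"},
  {"id": 3, "name": "c"}
]
```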