
ExecuteSQL dynamic query

Rising Star

Dear forum,

Is it possible to have a dynamic query using the ExecuteSQL processor?

For example:

FileX.txt content (comma-separated):
XXXXX, BBBBB, CCCCC, CCCDD
XXXXX, EEEEE, CCCCC, DDDDD

Desired output:
XXXXX, BBBBB, CCCCC, CCCDD, 'select customer_name from table where id=CCCDD limit 1'
XXXXX, EEEEE, CCCCC, DDDDD, 'select customer_name from table where id=DDDDD limit 1'

I want to append the result of the select statement to the original file.

Thanks!

6 REPLIES

Master Guru

So if the customer_name value for id=CCCDD was "Matt" then you'd like the first output row to read:

XXXXX, BBBBB, CCCCC, CCCDD, Matt

Is that correct? If so, you could do the following:

  1. Use SplitText to split the incoming CSV into one flow file per line
  2. ExtractText to store the four column values as attributes (there is an example template called Working_With_CSV); let's assume the attribute for the fourth column is called "column.4" (example properties are sketched after this list)
  3. ReplaceText to set the content of the flow file to a SQL statement "select customer_name from table where id=${column.4} limit 1"
  4. ExecuteSQL to execute the statement
  5. ConvertAvroToJSON to get the record into JSON (for further processing)
  6. EvaluateJsonPath to get the value of customer_name into an attribute (named "customer.name" with a JSON Path of $[0].customer_name or something like that)
  7. ReplaceText to set the row back to the original columns plus the new one, with something like "${column.1},${column.2},${column.3},${column.4}, ${customer.name}"
  8. (optional) MergeContent to join the rows back together (if you need them as one file)
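
For illustration, the ExtractText and ReplaceText properties for steps 2, 3 and 7 might be configured roughly as below. The property name "column", the regex, and the strategy values are assumptions about your data (four comma-separated fields with optional spaces after the commas), not taken from the Working_With_CSV template:

  ExtractText (step 2), one dynamic property:
    column = ^\s*([^,]+),\s*([^,]+),\s*([^,]+),\s*([^,]+)\s*$
    (the four capture groups become attributes column.1 through column.4)

  ReplaceText (step 3):
    Replacement Strategy = Always Replace
    Replacement Value    = select customer_name from table where id=${column.4} limit 1

  ReplaceText (step 7):
    Replacement Strategy = Always Replace
    Replacement Value    = ${column.1},${column.2},${column.3},${column.4}, ${customer.name}

Note that if id is a text column in the database, the ${column.4} value would also need to be quoted (and trimmed) in the SQL statement.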

Rising Star

It's working, many thanks!

Is it possible to do the same with the ExecuteScript processor?

Rising Star

Is it possible to route files based on file content,

e.g. route to processor A if field2 = BBBBB, and to processor B if field2 = AAAA? @Matt Burgess

Master Guru

Yes, you can use something like the regex from step 2 above in a RouteOnContent processor, or, after ExtractText (step 2 above), you can use RouteOnAttribute to look at the value of column.2.
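
As a sketch, RouteOnAttribute after ExtractText could use dynamic properties like these (the relationship names are made up, and trim() just guards against stray spaces around the field value):

  RouteOnAttribute:
    Routing Strategy = Route to Property name
    toProcessorA     = ${column.2:trim():equals('BBBBB')}
    toProcessorB     = ${column.2:trim():equals('AAAA')}

Each property name becomes its own relationship, which you can then connect to the appropriate downstream processor.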

Rising Star

Thanks @Matt Burgess

Should I put the regex in a property? Can you advise further on how to define it?

Thanks a million

Master Guru

Yes, you can add a dynamic property whose value is a regular expression (see the documentation for more details).
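
For example, on a RouteOnContent processor fed by the per-line flow files from SplitText, the dynamic properties might look roughly like this (property names and regexes are illustrative, based on the sample rows above):

  RouteOnContent:
    Match Requirement = content must contain match
    toProcessorA      = ^[^,]+,\s*BBBBB\s*(,|$)
    toProcessorB      = ^[^,]+,\s*AAAA\s*(,|$)

Each dynamic property becomes its own relationship, and flow files whose content matches that regex are routed to it.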