Member since 07-29-2020
574 Posts
323 Kudos Received
176 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3584 | 12-20-2024 05:49 AM |
| | 3829 | 12-19-2024 08:33 PM |
| | 3630 | 12-19-2024 06:48 AM |
| | 2365 | 12-17-2024 12:56 PM |
| | 3115 | 12-16-2024 04:38 AM |
07-20-2023
06:34 AM
@PradNiFi1236, Is the invoice number unique? If the filename matches between the two formats, have you tried using it as the Correlation Attribute instead? Of course, you would first have to derive the filename without its extension into another attribute, then use that new attribute as the Correlation Attribute. Also try setting the "Minimum Number of Entries" back to 1. If none of that helps, can you please post screenshots of the MergeContent processor configuration along with the configurations of the other critical processors? Thanks
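As a sketch of that first step (the attribute name `filename.noext` here is hypothetical, and this assumes the extension is everything after the last dot), an UpdateAttribute processor could derive the bare filename with NiFi Expression Language:

```
# Hypothetical dynamic property on UpdateAttribute:
# strips the extension, e.g. "invoice_123.pdf" -> "invoice_123"
filename.noext = ${filename:substringBeforeLast('.')}
```

You would then point MergeContent's "Correlation Attribute Name" at `filename.noext`.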
07-19-2023
01:08 PM
Hi @Dataengineer1, This has been asked before; please refer to: https://community.cloudera.com/t5/Support-Questions/NIFI-Is-it-possible-to-make-a-x-www-form-urlencoded-POST/m-p/339398 Thanks
07-19-2023
12:48 PM
Hi @PradNiFi1236, I'm not sure I fully understand your question, but based on what I read, you are trying to merge based on a common attribute. If that is the case, MergeContent allows you to do that via the "Correlation Attribute Name" property: "If specified, like FlowFiles will be binned together, where 'like FlowFiles' means FlowFiles that have the same value for this Attribute. If not specified, FlowFiles are bundled by the order in which they are pulled from the queue. Supports Expression Language: true (will be evaluated using flow file attributes and variable registry). This Property is only considered if the <Merge Strategy> Property has a value of 'Bin-Packing Algorithm'." Hope that helps.
07-19-2023
10:20 AM
Hmmm, I don't see any issue with what you have sent. What do your JsonTreeReader & JsonRecordSetWriter configurations look like? Mine look like the following: Also, what version of NiFi are you using? Thanks
07-19-2023
06:29 AM
You are welcome. Have you changed anything in the Jolt transformation spec, and if so, what does the output look like? Can you also provide the QueryRecord processor configuration?
07-18-2023
04:13 PM
1 Kudo
Hi @Paulito, The easiest way I can think of is to do this in two processors:
1- JoltTransformJson: allows you to transform your JSON by simplifying it into an array of records, where each record is a list of fieldname:fieldvalue pairs. To achieve this, provide the following Jolt spec in the "Jolt Specification" property of the processor:
[
{
"operation": "shift",
"spec": {
"Result": {
"ResultData": {
"DATARECORD": {
"*": {
"DATAFIELD": {
"*": {
"FIELDVALUE": "[&3].@(1,FIELDNAME)"
}
}
}
}
}
}
}
}
]
Basically, the spec above will give you the following JSON based on the provided input:
[ {
"name" : "JSMITH",
"namedesc" : "John Smith",
"hireddate" : "01-JAN-2010"
}, {
"name" : "RSTONE",
"namedesc" : "Robert Stone",
"hireddate" : "01-JAN-2011"
} ] 2- QueryRecord Processor: to allow you to select the fields you are interested in for the given API as follows. The query is just like sql query and you can either specify wildcard (*) for all fields or just list particular fields as follows: The out put of the QueryRecord will look like this: [
{
"name": "JSMITH",
"namedesc": "John Smith"
},
{
"name": "RSTONE",
"namedesc": "Robert Stone"
}
]
Of course, you can make this dynamic for each API by providing both the Jolt spec and the query as flowfile attributes, since both properties support expression language (EL). You can also achieve the same result with just the JoltTransformJson processor, but the spec would be more complex and I don't want to overwhelm you in case you are new to it; if you are interested, let me know and I will be happy to provide that spec. If this helps, please accept the solution. Thanks
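Outside NiFi, the effect of that Jolt shift can be sketched in plain Python (an illustration only, assuming the nested Result/ResultData/DATARECORD/DATAFIELD input structure implied by the spec, not the actual NiFi implementation):

```python
def flatten_records(payload):
    """Reproduce the Jolt shift: one flat dict per DATARECORD entry,
    keyed by each field's FIELDNAME, valued by its FIELDVALUE."""
    return [
        {field["FIELDNAME"]: field["FIELDVALUE"] for field in record["DATAFIELD"]}
        for record in payload["Result"]["ResultData"]["DATARECORD"]
    ]

# Sample input matching the structure the Jolt spec expects.
sample = {
    "Result": {
        "ResultData": {
            "DATARECORD": [
                {"DATAFIELD": [
                    {"FIELDNAME": "name", "FIELDVALUE": "JSMITH"},
                    {"FIELDNAME": "namedesc", "FIELDVALUE": "John Smith"},
                    {"FIELDNAME": "hireddate", "FIELDVALUE": "01-JAN-2010"},
                ]},
                {"DATAFIELD": [
                    {"FIELDNAME": "name", "FIELDVALUE": "RSTONE"},
                    {"FIELDNAME": "namedesc", "FIELDVALUE": "Robert Stone"},
                    {"FIELDNAME": "hireddate", "FIELDVALUE": "01-JAN-2011"},
                ]},
            ]
        }
    }
}

print(flatten_records(sample)[0]["name"])  # JSMITH
```

This yields the same array of flat records shown above, which the QueryRecord step then filters down to the fields of interest.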
07-17-2023
12:06 PM
Hi @Madhav_VD, In addition to what @steven-matison mentioned, which will work, there is another option: you can capture errors by setting up a "SiteToSiteBulletinReportingTask", which will channel all errors to an input port where you can capture the error information in JSON format from all processors and all nodes in the cluster, and then store the errors in SQL or send a notification email. This option might require additional steps to set up, but it's worth it, especially if you have a cluster. For more info on how to set up the "SiteToSiteBulletinReportingTask", please refer to: https://pierrevillard.com/2017/05/13/monitoring-nifi-site2site-reporting-tasks/ Hope that helps
07-17-2023
11:54 AM
Hi @Paulito, How would you like to get the output of all the names and namedescs? Would you like them as a list in JSON, CSV, or another format, or do you want them stored in flowfile attributes?
06-16-2023
11:26 AM
You have to create a separate CASE statement for each column you are trying to update, similar to what is done for the shipment number.
06-16-2023
11:14 AM
You either have to select each column one by one so that the CASE statement's output keeps the same column name as the column you are trying to update, or select * but write the CASE statement to a different column and then use a Jolt transformation to copy the value back to the original column.
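The first approach can be sketched as a QueryRecord query (column names here are hypothetical; QueryRecord exposes the incoming records as the FLOWFILE table, and each CASE is aliased back to its original column name):

```sql
-- One CASE per column being updated, aliased to the original column name
-- so the updated value replaces the old one in the output record.
SELECT
  CASE WHEN shipmentnumber IS NULL THEN 'N/A' ELSE shipmentnumber END AS shipmentnumber,
  CASE WHEN carrier IS NULL THEN 'UNKNOWN' ELSE carrier END AS carrier
FROM FLOWFILE
```

Any columns not listed in the SELECT are dropped from the output, which is why the select-each-column approach requires enumerating everything you want to keep.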