Member since: 07-29-2020
Posts: 574
Kudos Received: 323
Solutions: 176
My Accepted Solutions
| Views | Posted |
|---|---|
| 2116 | 12-20-2024 05:49 AM |
| 2419 | 12-19-2024 08:33 PM |
| 2162 | 12-19-2024 06:48 AM |
| 1448 | 12-17-2024 12:56 PM |
| 2063 | 12-16-2024 04:38 AM |
01-26-2024
02:09 AM
2 Kudos
Hi @SandyClouds , I ran into this issue before, and after some research I found that ConvertJsonToSQL assigns the timestamp data type (value = 93 in the sql.args.[n].type attribute). When PutSQL runs the generated SQL statement, it parses each value according to the assigned type and formats it accordingly. For timestamps, however, it expects the value to be in the format "yyyy-MM-dd HH:mm:ss.SSS", so if the original datetime value is missing the milliseconds, it fails with the error message you are seeing. To resolve the issue, make sure to append 000 milliseconds to your datetime value before the PutSQL processor runs. You can do that either in the source JSON itself before the conversion to SQL, or after the conversion using UpdateAttribute; with the latter option you have to know which sql.args.[n].value holds the datetime and use expression language to reformat it. If that helps please accept the solution. Thanks
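As a rough sketch of the UpdateAttribute option, assuming (hypothetically) that the datetime landed in sql.args.1.value and currently has no fractional seconds, a dynamic property like this appends the missing milliseconds:

```
# UpdateAttribute (sketch; the argument index 1 is an assumption,
# check your flowfile attributes to find which sql.args.[n].value holds the datetime)
# Property name:  sql.args.1.value
# Property value:
${sql.args.1.value:append('.000')}
```

After this, the value matches the "yyyy-MM-dd HH:mm:ss.SSS" shape that PutSQL expects for type 93.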
01-26-2024
01:33 AM
2 Kudos
Hi @ALWOSABY , What is the value of FF_Content? Is it the entire JSON record? If so (as it appears from the specified path), why not use EvaluateJsonPath to extract whatever values are needed and store them as attributes, by setting the Destination property to flowfile-attribute? See the following post to learn more: https://community.cloudera.com/t5/Support-Questions/How-to-handle-json-using-EvaluateJsonPath-processor-in-NiFi/m-p/295335 If that helps please accept the solution. Thanks
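For illustration, an EvaluateJsonPath configuration along those lines might look like this (the attribute names and JsonPaths here are hypothetical, since I don't know your actual JSON structure):

```
# EvaluateJsonPath (sketch with made-up field names)
# Destination:  flowfile-attribute
# Return Type:  auto-detect
# Dynamic properties (attribute name -> JsonPath):
order_id    = $.order.id
order_total = $.order.total
```

Each matched value then becomes a flowfile attribute you can reference downstream with expression language, e.g. ${order_id}.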
01-21-2024
08:17 PM
1 Kudo
OK, I see what is happening here. The QueryRecord is not needed, and it's giving you the error because it expects JSON but is getting the INSERT SQL statement created by the ConvertJsonToSQL processor. Does the record get inserted after PutSQL executes the INSERT statement? If so, the new id should be written as the flowfile attribute "sql.generate.key"; it is not going to be part of the flowfile content, and QueryRecord is not used for this purpose. Each flowfile has content, which is the actual data it represents, and attributes, which are its metadata. When you list the flowfile, click the (i) icon in the first column and select the Attributes tab; the attribute should be there with the new ID value. I was confused because ConvertJsonToSQL is showing an error status in your screenshot.
01-20-2024
08:49 PM
Can you provide a screenshot of the ConvertAvroToJson processor? What output are you getting out of this processor?
01-18-2024
06:13 PM
What seems to be the issue? Please provide more details. If you are getting any error messages, please share them.
01-17-2024
09:25 PM
Hi @jarviszzzz , If you use the PutSQL processor, there is a property called "Obtain Generated Keys" which is described as follows: "If true, any key that is automatically generated by the database will be added to the FlowFile that generated it using the sql.generate.key attribute. This may result in slightly slower performance and is not supported by all databases." So basically you don't have to do anything extra besides setting this property to true; the new id should be written back as a flowfile attribute called "sql.generate.key". PutSQL is very flexible: you can convert the JSON to SQL using the ConvertJsonToSQL processor and then use PutSQL without specifying anything in the SQL Statement property. Keep in mind that if you choose the ConvertJsonToSQL approach, the field names should match the target table column names, and the data types should be compatible. If the field names don't match, or you need more flexibility in how the values are inserted, you can specify the SQL INSERT statement in the SQL Statement property and use expression language to reference the different JSON field values, but you first need to extract them into flowfile attributes using a processor like EvaluateJsonPath with the Destination set to flowfile-attribute. If that helps please accept the solution. Thanks
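To make the second approach concrete, here is a sketch of a PutSQL configuration. The table and attribute names (users, user.name, user.email) are made up for illustration; I'm assuming the attributes were extracted earlier with EvaluateJsonPath:

```
# PutSQL (sketch; table and attribute names are hypothetical)
# Obtain Generated Keys:  true
# SQL Statement:
INSERT INTO users (name, email)
VALUES ('${user.name}', '${user.email}')
# After a successful insert, the generated id is available
# on the flowfile as the sql.generate.key attribute.
```

Note that substituting attribute values directly into the statement like this is simple but does not use parameterized arguments, so it is only safe for trusted data.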
01-17-2024
09:05 PM
1 Kudo
Hi @MWM , The reason the order changes when the values are null is that those fields are added by the default spec after being removed from the initial shift spec when the values are set to blank. There is no quick fix if you want to maintain the order, but I can suggest a couple of options:

1. Add an additional shift spec at the end of the spec above to re-enforce the desired order. The problem with this is that you have to list all the fields as given in the new structure, which tends to be challenging, especially when you have a lot of fields and a complex structure.

2. Handle setting the blank values to null for the desired fields before applying the transformation. For that, you can write custom code in a Groovy script, or take advantage of the UpdateRecord processor and the powerful NiFi record path engine, which has a lot of built-in functions that can help you do this easily. Not only should both approaches be easier than the first option, because you don't have to list all the fields (only the ones expected to be blank), but this also simplifies your Jolt transformation downstream, because you no longer need to worry about blank values.

I'm not a Groovy expert, so I can show you how to do it via the UpdateRecord processor. Basically, I listed the path for each of the desired fields in the input JSON and set the value to the following record path function: /fieldname[not(isEmpty(/fieldname))], which returns the value of the given field on the condition that the value is not empty. The isEmpty function returns true if the value is null or blank; if the condition is not met, the returned value will be null. Make sure to set the Replacement Value Strategy to Record Path Value. If that helps please accept the solution. Thanks
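As a sketch, the UpdateRecord configuration for a single field would look like this (the field name "status" is hypothetical; repeat the dynamic property for each field that may arrive blank):

```
# UpdateRecord (sketch; "status" stands in for your actual field name)
# Replacement Value Strategy:  Record Path Value
# Dynamic property (record path -> replacement record path):
/status  =  /status[not(isEmpty(/status))]
# Blank or null "status" values become null; non-blank values pass through unchanged.
```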
01-14-2024
09:14 PM
2 Kudos
Hi @enam , The concept of templates has been removed in 2.0, per the following article: https://medium.com/cloudera-inc/getting-ready-for-apache-nifi-2-0-5a5e6a67f450 You can either use the NiFi Registry to share flows, as the article suggests, or you can right-click on a given process group and select Download Flow Definition, which will save the process group in JSON format. If that helps please accept the solution. Thanks
12-29-2023
08:06 AM
1 Kudo
Hi @MWM , The following worked for me: The GenerateFlowFile has JSON content and an attribute flowfile_id with a value of 123. In the ReplaceText, I replace everything with an empty string. In the UpdateAttribute, I add a new attribute new_attr with a value of 555. In the MergeContent, I use flowfile_id as the Correlation Attribute Name; also notice that I set the "Minimum Number of Entries" to 2 so that the original flowfile will wait until the second one is ready. The result is the original flowfile content with the newly added attribute. An alternative to MergeContent is to use PutDistributedMapCache and FetchDistributedMapCache: store the original content in the cache, do whatever is needed to get the new attributes, and finally fetch the original content again; this gives you the original flowfile including the new attributes. The only caveat with this approach is that you have to create two controller services: DistributedMapCacheClientService & DistributedMapCacheServer. Another issue with the DistributedMapCacheClientService is that you have to provide a server hostname, which could be the same as your NiFi node; however, this introduces a single point of failure, especially when you have a cluster. For more info: https://stackoverflow.com/questions/44590296/how-does-one-setup-a-distributed-map-cache-for-nifi If that helps please accept the solution. Thanks
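A sketch of the MergeContent settings described above (the Attribute Strategy line is my addition; without it, an attribute that exists on only one of the two flowfiles may be dropped during the merge):

```
# MergeContent (sketch of the settings described above)
# Merge Strategy:              Bin-Packing Algorithm
# Correlation Attribute Name:  flowfile_id
# Minimum Number of Entries:   2
# Attribute Strategy:          Keep All Unique Attributes
```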
12-29-2023
05:42 AM
1 Kudo
Hi @Heeya8876 , I ran into the same situation. What worked for me was changing the setting in nifi.properties from nifi.python.command=pythons to nifi.python.command=python Also make sure the Python venv module is available on your machine: python -m venv If that helps please accept the solution. Thanks
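For reference, the relevant nifi.properties line after the fix (and a quick way to check that the venv module is present; the --help flag just prints usage without creating anything):

```
# nifi.properties (after the fix)
nifi.python.command=python

# Verify the venv module is available:
#   python -m venv --help
```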