Member since: 01-27-2023
229 Posts
73 Kudos Received
45 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 618 | 02-23-2024 01:14 AM |
| | 768 | 01-26-2024 01:31 AM |
| | 555 | 11-22-2023 12:28 AM |
| | 1244 | 11-22-2023 12:10 AM |
| | 1436 | 11-06-2023 12:44 AM |
11-01-2023
02:42 AM
@Wadok88, The problem you are reporting is not related to the database itself, but to how you configured your NiFi instance and, in particular, your ZooKeeper. First of all, are you using an embedded ZooKeeper or an external ZooKeeper? Did you configure the state-management.xml file and the nifi.properties file with the correct connection string for your ZooKeeper nodes? Secondly, when running NiFi as a cluster, ZooKeeper is used to maintain the state of certain processors across the entire cluster, meaning that those processors will attempt to use the state manager even if it was not configured -- or, in your case, not configured correctly. So what I suggest you do is check the ZooKeeper configuration from within NiFi. Next, set your processor's log level to DEBUG and check whether each node is able to retrieve the state through ZooKeeper. Maybe you have a connectivity issue from a specific node.
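As a rough illustration, the two files mentioned above tie together like this for an external three-node ensemble (the hostnames and provider id here are placeholders, not values from the original question):

```
# nifi.properties (excerpt)
nifi.state.management.configuration.file=./conf/state-management.xml
nifi.state.management.provider.cluster=zk-provider
nifi.state.management.embedded.zookeeper.start=false
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```

```xml
<!-- state-management.xml (excerpt): the id must match the provider
     referenced in nifi.properties above -->
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>
```

If the connect string differs between the two files, or points at unreachable hosts, the stateful processors fail exactly as described.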
10-22-2023
11:11 PM
1 Kudo
@AhmedParvez, have a look at what I told you in the previous post. EOF means that most likely you are sending the API call incorrectly somehow, and you need to have a look at it. Try making that same exact API call from Postman or from Python (or anything else you know how to use) and see if you encounter the same problem. Unfortunately this is not a NiFi problem but a configuration problem, and as you are very reluctant to provide all the details, nobody will be able to help you with your request 😞 Set your processor's log level to DEBUG and see what extra information you can get from there as well. Maybe it will provide the info you need to find your error.
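One way to reproduce the call outside NiFi is to build the request in Python and inspect it before sending. The URL and payload below are hypothetical stand-ins; substitute whatever your InvokeHTTP processor is actually configured with:

```python
import json
import urllib.request

# Hypothetical endpoint and body -- replace with your own values.
payload = json.dumps({"name": "test"}).encode("utf-8")
req = urllib.request.Request(
    url="https://api.example.com/v1/items",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Inspect exactly what would go on the wire; an EOF on the other end
# often means the method, headers, or body are not what the server expects.
print(req.get_method())
print(req.get_header("Content-type"))
print(req.data)

# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status, resp.read())
```

Comparing this against what NiFi sends (visible at DEBUG level) usually pinpoints the mismatch.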
10-19-2023
03:50 AM
@Kiranq, What did you configure in UpdateRecord? Most likely your problem starts from there.
10-18-2023
11:53 PM
1 Kudo
@Fanxxx, How I would do the first POC:
1) GetMongoRecord: execute the count on the first table. Using the "Query Output Attribute" property, you save that value directly as an attribute.
2) Connected to the success queue, another GetMongoRecord: execute the count on the second table, again saving the value as an attribute via "Query Output Attribute".
3) Connected to the success queue, a RouteOnAttribute: here you define a rule --> if count1 = count2, do what you want to do; otherwise call the logic for the insert, as you said. (Using NiFi Expression Language: ${attribute1:equals(${attribute2})})
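The routing decision in step 3 boils down to a string comparison of two FlowFile attributes. A plain-Python sketch of that logic (the attribute and relationship names are hypothetical, not anything NiFi-specific):

```python
def route(attributes: dict) -> str:
    """Return the relationship a FlowFile would be routed to, mimicking
    the RouteOnAttribute rule ${attribute1:equals(${attribute2})}.
    FlowFile attributes are strings, so the comparison is string equality."""
    if attributes["attribute1"] == attributes["attribute2"]:
        return "counts_match"   # hypothetical rule name: nothing to do
    return "unmatched"          # falls through to the insert logic

print(route({"attribute1": "42", "attribute2": "42"}))  # counts_match
print(route({"attribute1": "42", "attribute2": "40"}))  # unmatched
```

Note that because attributes are strings, "042" and "42" would not match; the counts coming from GetMongoRecord are formatted consistently, so in practice this is not a problem.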
10-18-2023
11:37 PM
1 Kudo
@Hae, Well, if you are going to use ExecuteSQL to execute the INSERT statement based on the content of the file, from my point of view you should first extract the value of your content into an attribute on your FlowFile. To do that, add an ExtractText processor, define a new property named "what_you_want_to_have_it_called", and assign it the value ".*". This will extract the content of your FlowFile and store it in the attribute you defined as a property. NOTE: if you have lots of information stored in the content of the FlowFile, you will run into plenty of issues with this approach. It only works properly with small FlowFiles, exactly like the ones you described in your post 🙂 Next, using ExecuteSQL, you execute the insert like: INSERT INTO table VALUES('${what_you_want_to_have_it_called}'). Another option, which avoids all these steps, would be to use a PutDatabaseRecord processor, in which you define directly the action you are trying to perform and it handles everything for you. NOTE: this works fine for larger FlowFiles as well, as you no longer need to extract the content as attributes and NiFi handles everything on its own. The only downside is that you will have to configure a Record Reader.
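The first approach (extract content as an attribute, then insert it) can be sketched in plain Python; the table and column names here are hypothetical, and SQLite stands in for whatever database ExecuteSQL points at:

```python
import re
import sqlite3

# What ExtractText does with a property valued ".*": the whole
# FlowFile content becomes an attribute (works only for small payloads).
flowfile_content = "hello world"
attributes = {
    "what_you_want_to_have_it_called":
        re.match(r".*", flowfile_content, re.DOTALL).group(0)
}

# What ExecuteSQL then runs -- parameterized here to avoid quoting issues
# that the literal '${...}' substitution in NiFi can run into.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (col1 TEXT)")   # hypothetical table
conn.execute(
    "INSERT INTO my_table (col1) VALUES (?)",
    (attributes["what_you_want_to_have_it_called"],),
)
row = conn.execute("SELECT col1 FROM my_table").fetchone()
print(row[0])
```

One caveat worth keeping in mind with the NiFi version: because Expression Language substitution is plain text replacement, content containing single quotes can break or abuse the generated SQL, which is another argument for PutDatabaseRecord.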
10-17-2023
07:23 AM
@MWM, when using toString(), the Replacement Value Strategy should stay on Record Path Value.
10-17-2023
07:07 AM
The value for the property should certainly look like toString(/YOUR_COLUMN, "UTF-8"), without those backslashes. Have a look at the documentation for the expression language and you shall see. As for how the data will look, that is nothing you can change here. Give it a try with the correct syntax, not the one that you wrote. Maybe the data in your bytes column is stored encrypted or in another format. You should discuss this further with the owner of the view and understand how the view is built and how the data is stored in the column. Without that information, it is hard to establish the right way to extract the data correctly. Another solution would be to use a Python script and execute it on the content of the Avro file to decode the bytes column into a string, then send the output forward for processing. (This is not easy to implement, especially if you do not have sudo on the NiFi machines.)
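The decode step itself is simple; what toString(/YOUR_COLUMN, "UTF-8") does is equivalent to this Python sketch (the byte values are made up for illustration, and reading the Avro container itself would additionally need a library such as fastavro, not shown here):

```python
# A bytes-column value that genuinely holds UTF-8 text decodes cleanly:
raw = b"3a5f9c2e"            # hypothetical column value
decoded = raw.decode("utf-8")
print(decoded)

# If the column holds encrypted or binary-packed data instead, the decode
# fails -- a quick way to tell the two cases apart:
try:
    b"\xff\xfe\x00".decode("utf-8")
    verdict = "utf-8 text"
except UnicodeDecodeError:
    verdict = "not UTF-8 text"
print(verdict)
```

If the decode raises, that supports the "stored encrypted or in another format" hypothesis, and the conversation with the view's owner becomes unavoidable.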
10-17-2023
06:23 AM
1 Kudo
Open the Controller Service for AvroRecordSetWriter, and in the field Schema Access Strategy, switch from Inherit Record Schema to Use 'Schema Text' Property. Once you select this option, a new property will be added, named Schema text. In the Value field for this property add the AVRO Schema.
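For reference, what goes into the Schema Text property is a standard Avro schema in JSON form; this one is a made-up example, not the asker's actual schema:

```json
{
  "type": "record",
  "name": "my_record",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "name", "type": ["null", "string"], "default": null}
  ]
}
```

Field names must match the incoming records; fields declared as a union with "null" tolerate missing values.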
10-17-2023
05:39 AM
Well, there you go: UUID is bytes and not string. That is the reason why your data gets displayed like that when transformed into JSON. You need to convert the bytes into characters when extracting the data from the view if you need it as a string. What you could try is adding an UpdateRecord processor in which you define an Avro Reader and an Avro Writer. In the Avro Writer you set the schema you mentioned above, except that instead of bytes for UUID_1 you write string. Next, in the processor, you add a new property with the same name as your affected column, starting with "/". For the value, you use a RecordPath expression to transform the bytes into a string: toString(/YOUR_COLUMN_IN_BYTES_FORMAT, "UTF-8"). If that works, you can modify the flow so that instead of writing the data with the Avro Writer you set JSON directly and skip the ConvertAvroToJson step.
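To make the reader/writer pairing concrete, here is a minimal hypothetical schema pair (only the UUID_1 type differs). The Avro Reader schema matches the source:

```json
{
  "type": "record",
  "name": "rec",
  "fields": [{"name": "UUID_1", "type": "bytes"}]
}
```

and the Avro Writer schema declares the same field as string:

```json
{
  "type": "record",
  "name": "rec",
  "fields": [{"name": "UUID_1", "type": "string"}]
}
```

The UpdateRecord property would then be named /UUID_1 with the value toString(/UUID_1, "UTF-8"), and the Replacement Value Strategy set to Record Path Value.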
10-17-2023
04:57 AM
To see the content of the file, you can use NiFi. However, to see the generated schema, you will need an IDE like IntelliJ with the Avro/Parquet plugin, or you can search for an online Avro reader and upload your data there.