Member since: 03-29-2023
Posts: 52
Kudos Received: 32
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1340 | 04-05-2024 12:26 PM
 | 1841 | 03-05-2024 10:53 AM
 | 12143 | 04-03-2023 12:57 AM
07-02-2024
01:00 PM
1 Kudo
@enam There is a slight mistake in my NiFi Expression Language (NEL) statement in my post above. It should be as follows instead:

Property = filename
Value = ${filename:substringBeforeLast('.')}-${UUID()}.${filename:substringAfterLast('.')}

Thanks, Matt
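As a quick illustration (the input filename and UUID below are made up), the expression splits the name on the last dot and inserts a random UUID before the extension:

Input  filename: invoice.csv
Output filename: invoice-a1b2c3d4-e5f6-7890-abcd-ef1234567890.csv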
07-01-2024
10:27 AM
@NidhiPal09 Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
06-27-2024
03:34 AM
1 Kudo
Hello, you can try something like this (a sketch of the key properties follows below):

Step 1: Add an InvokeHTTP processor to generate the token.
Step 2: Extract the token using an EvaluateJsonPath processor.
Step 3: Use the token in a second InvokeHTTP processor.

Thanks,
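A minimal sketch of those three steps, assuming the token endpoint returns JSON containing an access_token field (the URLs, field name, and attribute name are illustrative, not from the original post):

InvokeHTTP (get token)
  HTTP Method   = POST
  Remote URL    = https://auth.example.com/oauth/token     (placeholder)

EvaluateJsonPath (extract token into an attribute)
  Destination   = flowfile-attribute
  access_token  = $.access_token                           (assumed response field)

InvokeHTTP (call the protected API)
  Remote URL    = https://api.example.com/resource         (placeholder)
  Authorization = Bearer ${access_token}                   (dynamic property, sent as a request header)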
06-20-2024
10:57 AM
Transferring Data from Multiple Tables in NiFi:

NiFi provides processors that pull data from database tables using JDBC drivers. For Oracle, you can use processors like ExecuteSQL, QueryDatabaseTable, and GenerateTableFetch. To transfer data from multiple tables, consider the following approaches:

- Individual Flows for Each Table: Create a separate NiFi flow for each table. This approach is straightforward but may require more management.
- Dynamic SQL Generation: Use the ListDatabaseTables processor to list tables dynamically. Then use ReplaceText to create a SQL statement for each table (using NiFi Expression Language). Finally, send these statements to ExecuteSQL to fetch the data (see the sketch after this post).
- Parallel Fetching: If you have a NiFi cluster, route GenerateTableFetch into a Remote Process Group pointing at an Input Port on the same cluster so the fetches are distributed across nodes.

Automatic Table Creation in Cassandra Using Avro Schemas:

To write data into Cassandra, you can use the PutCassandraRecord processor, which lets you put data directly into Cassandra without writing CQL. For schema management, consider defining Avro schemas for your data and using them within your NiFi flow. To handle overwriting tables, you'll need to manage that logic in your flow.

Related reading: Can Nifi load data from DB2 to Cassandra? - Stack Overflow
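A minimal sketch of the Dynamic SQL Generation approach, assuming the db.table.fullname attribute written by ListDatabaseTables and a simple SELECT (the query is illustrative and not tuned for large tables):

ListDatabaseTables
  Database Connection Pooling Service = <your DBCPConnectionPool>

ReplaceText (build the SQL statement from the table-name attribute)
  Replacement Strategy = Always Replace
  Replacement Value    = SELECT * FROM ${db.table.fullname}

ExecuteSQL
  Database Connection Pooling Service = <your DBCPConnectionPool>
  SQL select query                    = (left empty so the FlowFile content built above is used as the query)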
06-18-2024
01:24 PM
@omeraran If your source is continuously being written to, you might consider using the GenerateTableFetch processor --> ExecuteSQLRecord processor (configured to use a JsonRecordSetWriter) --> PutDatabaseRecord processor. Working with multi-record FlowFiles via the record-based processors will give you a more efficient and performant dataflow (a sketch of the key properties follows below). Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on one or more of them. Thank you, Matt
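A rough sketch of that chain, assuming an incremental column named LAST_UPDATED and source/target table names that are purely illustrative (none of these names come from the original question):

GenerateTableFetch
  Database Connection Pooling Service = <source DBCPConnectionPool>
  Table Name                          = MY_SOURCE_TABLE    (placeholder)
  Maximum-value Columns               = LAST_UPDATED       (assumed incremental column)

ExecuteSQLRecord
  Database Connection Pooling Service = <source DBCPConnectionPool>
  Record Writer                       = JsonRecordSetWriter

PutDatabaseRecord
  Record Reader                       = JsonTreeReader
  Database Connection Pooling Service = <target DBCPConnectionPool>
  Statement Type                      = INSERT
  Table Name                          = MY_TARGET_TABLE    (placeholder)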
06-11-2024
06:55 AM
@ranie I see a couple of issues with your NiFi Expression Language (NEL) statement:

- There are formatting issues in your Java date format string: 'yyyy-MM-dd\'T\'00:00:00\'Z\'. Your single and double quotes are not balanced.
- You are using the "format()" function to change the timezone, but you could also use the "formatInstant()" function.
- You are missing the "toNumber()" function to convert the date string to a number before trying to apply a mathematical computation to it.

The now() function returns the current system date and time as the NiFi service sees it (my NiFi server, for example, uses the UTC timezone). The toNumber() function converts that into the number of milliseconds since midnight, Jan 1st, 1970 GMT; this number is always a GMT value. The formatInstant() function lets you take a GMT time or a Java-formatted date string and reformat it for a different timezone.

Taking the above feedback into consideration, the following NEL statement should work for you:

${now():toNumber():minus(86400000):formatInstant("yyyy-MM-dd'T'HH:mm:ss 'Z'", "CET")}

Pay close attention to your use of single and double quotes.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on one or more of them. Thank you, Matt
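A hedged worked example of what that statement does (the timestamp is made up, and it assumes the CET zone is observing summer time, i.e. UTC+2, at that moment):

now()                       -> 2024-06-11 06:55:00 UTC   (example instant)
:toNumber():minus(86400000) -> 2024-06-10 06:55:00 UTC   (86,400,000 ms = one day earlier)
:formatInstant(..., "CET")  -> 2024-06-10T08:55:00 Z     (rendered at UTC+2; the 'Z' is just a quoted literal from the format string)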
06-10-2024
06:51 AM
@udayAle Some NiFi processors process FlowFiles one at a time, and others may process batches of FlowFiles in a single thread execution. Then there are processors like MergeContent and MergeRecord that allocate FlowFiles to bins and only merge a bin once the minimum criteria to merge are met. With non-merge type processors, a FlowFile that results in a hung thread or long thread execution will block processing of the FlowFiles next in the queue. For merge type processors, depending on data volumes and configuration, 5 minutes might be expected behavior (or you could set a max bin age of 5 minutes to force a bin to merge even if the minimums have not been satisfied).

So I think there are two approaches to look at here: one monitors long-running threads and the other looks at failures.

Runtime Monitoring Properties: When configured, this background process checks for long-running threads and produces log output and NiFi bulletins when a thread exceeds a threshold. You could build an alerting dataflow around this using the SiteToSiteBulletinReportingTask, some routing processors (to filter the specific types of bulletins related to long-running tasks), and then an email processor.

Failure relationships: The majority of processors that have potential for failures will have a failure relationship. You can build a dataflow using that failure relationship to alert on those failures. Consider a failure relationship routed to an UpdateAttribute processor that uses the Advanced UI to increment a failure counter, which then feeds a RouteOnAttribute processor that routes based on the number of failed attempts. After X number of failures it could send an email via PutEmail (a rough sketch of the counter/routing idea follows below).

Apache NiFi does not have a background "Queued Duration" monitoring capability. Programmatically building one would be expensive resource-wise, as you would need to monitor every single constantly changing connection and parse out any FlowFile with a "Queued Duration" in excess of X amount of time. Consider a processor that is hung: the connection would continue to grow until backpressure kicks in and forces upstream processors to start queueing, and you could end up with 10,000 FlowFiles alerting on queued duration.

Hopefully this helps you look at the use case a little differently. Keep in mind that all monitoring, including the examples I provided, will have an impact on performance.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on one or more of them. Thank you, Matt
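A rough sketch of the failure-counter idea, assuming a counter attribute named failure.count and a threshold of 3 attempts (both are illustrative choices; the increment is shown here as a plain dynamic property rather than the Advanced UI rules, for brevity):

UpdateAttribute (fed by the failure relationship)
  failure.count = ${failure.count:replaceNull(0):toNumber():plus(1)}

RouteOnAttribute
  Routing Strategy = Route to Property name
  retry            = ${failure.count:toNumber():lt(3)}    (route back to the processor for another attempt)
  exhausted        = ${failure.count:toNumber():ge(3)}    (route on to PutEmail / alerting)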
06-05-2024
08:56 AM
Hi @mstfo, first of all, it seems that the output you have after the ReplaceText processor isn't really valid JSON, because of the missing commas (',') between the fields. If your output is exactly like this, you will need to add those missing commas so that you get JSON like this:

[
  {
    "Name": "Gta V",
    "Type": "Xyz",
    "content": "{\"Game\":{\"Subject\":[{\"Time\":{\"@value\":\"201511021057\" }}]}}"
  }
]

To obtain JSON like this you can either: try to change the logic where you get the data from the database, or try to add the commas manually. The first option is the quicker solution, but it is possible that you can't change the input you receive from the DB, so you may have to change it yourself. In any case, once you manage to obtain valid JSON from your input, you can manipulate the string using the functions in the JOLT beta operations (like =split()) and then reassemble the JSON you want (a minimal sketch follows below). I wrote a guide, and in the very last example there is a similar case, so if you want, go there and give it a look -> JOLT guide.
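As a hedged illustration of the =split() idea only (the target field "parts", the delimiter, and the source field are made up for the example; this is not the exact spec for the question above), a modify-overwrite-beta operation can break a string field into an array that later shift operations can reassemble:

[
  {
    "operation": "modify-overwrite-beta",
    "spec": {
      "*": {
        "parts": "=split(',', @(1,content))"
      }
    }
  }
]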
06-05-2024
12:31 AM
1 Kudo
Hi SAMSAL, I sent you a message. Can you check it, please?