Member since: 01-27-2023
Posts: 229
Kudos Received: 73
Solutions: 45
12-07-2023 09:50 AM

Just to add some further info: the value I am extracting from my Oracle database is 2012-05-21 23:59:35, and within the Avro file (which reaches my UpdateRecord processor) the value is 1337633975000. If we convert the value from the Avro file, we see that it represents 2012-05-21 20:59:35 (UTC), whereas I need it as it originally was, in Europe/Bucharest.

I have tried ${field.value:isNull():ifElse('0', ${field.value:toDate():format('yyyy-MM-dd HH:mm:ss','Europe/Bucharest')})} but I get java.lang.NumberFormatException: For input string: "2012-05-21 23:59:35"

I have also tried ${field.value:isNull():ifElse('0', ${field.value:toDate('yyyy-MM-dd HH:mm:ss','Europe/Bucharest'):format('yyyy-MM-dd HH:mm:ss','Europe/Bucharest')})} but I get Cannot parse attribute value as a date; date format: yyyy-MM-dd HH:mm:ss; attribute value: 1337633975000

And the part where I keep the value as it is, if the original value is empty/null ... that I have not yet solved.
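For what it's worth, a minimal sketch of another variant that might be worth trying, assuming the incoming field value really is the raw epoch-milliseconds number (in NiFi Expression Language, format() treats a numeric subject as milliseconds since epoch, so toDate() may not be needed at all) - untested:

```
${field.value:isNull():ifElse('0', ${field.value:toNumber():format('yyyy-MM-dd HH:mm:ss', 'Europe/Bucharest')})}
```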
12-07-2023 08:42 AM

@SowmyaP: I encountered some similar issues and it was always related to heap memory and CPU consumption. To solve the problem, I stopped the entire NiFi cluster and started it all over again, making sure that the processors were all in a STOPPED state (see the nifi.properties file in /conf for the relevant property). Once you start your cluster back up, make sure that you delete the file from the queue, to avoid future problems.
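For reference, the setting in question is nifi.flowcontroller.autoResumeState in conf/nifi.properties; when set to false, components come back in a stopped state after a restart:

```
# conf/nifi.properties
# false = do not auto-start components after a restart
nifi.flowcontroller.autoResumeState=false
```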
12-07-2023 08:34 AM

@MattWho, @stevenmatison, @SAMSAL: might you have some insight into this matter? I have been struggling with it for the last couple of days and I do not really know what to try next.
12-07-2023 08:32 AM

Hi there, I need your help with something. I am extracting 4 columns out of a database. Two of these columns are DATE in Oracle and I want to convert them into DATETIME (as they are going to be inserted into BigQuery). In addition, I am using an UpdateRecord processor to generate a new column, for which I am using the following Avro schema:

{
  "namespace": "example.avro",
  "type": "record",
  "name": "my_relevant_table",
  "fields": [
    {
      "name": "ID_REQUEST",
      "type": ["string", "null"]
    },
    {
      "name": "ID_PROCESS",
      "type": ["string", "null"]
    },
    {
      "name": "ARCHIVE_DATE",
      "type": ["null", { "type": "long", "logicalType": "local-timestamp-millis" }]
    },
    {
      "name": "EXEC_DATE",
      "type": ["null", { "type": "long", "logicalType": "local-timestamp-millis" }]
    },
    {
      "name": "LOAD_DATE",
      "type": ["null", { "type": "int", "logicalType": "date" }]
    }
  ]
}

Now, as you can imagine, I will encounter some null values within the ARCHIVE_DATE column, because this is normal behavior. If I let the flow execute as it is, the Avro file gets generated even with the NULL value. However, I would like to change something: if that column has a value, I would like to modify it and assign it a timezone, as the actual value gets converted into UTC and I do not want that. How can I achieve this using the UpdateRecord processor?

I am pretty sure that I will have to use an ifElse statement, but in terms of values, I have tried several things and all of them ended with an error: ${field.value:isNull():ifElse('0', ${field.value:toNumber():toDate('yyyy-MM-dd HH:mm:ss.SSS','Europe/Bucharest'):format('yyyy-MM-dd HH:mm:ss.SSS','Europe/Bucharest')})}

The thing is that if I use '0', it works and assigns the value 0 if the field value is empty or null (the column contains no value, no space, not even the string null). However, if I replace '0' with '' I receive a NumberConversion error message. In this case, how can I make sure that the values within that column remain null/empty if they are already null/empty, and otherwise apply the Europe/Bucharest timezone to that specific field value?
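A minimal sketch of the kind of UpdateRecord property this might call for (the record path /ARCHIVE_DATE is illustrative, and whether the null branch survives the writer's type conversion depends on the writer schema, so treat this as untested):

```
Property name:  /ARCHIVE_DATE
Property value: ${field.value:isNull():ifElse(${field.value}, ${field.value:toNumber():format('yyyy-MM-dd HH:mm:ss.SSS', 'Europe/Bucharest')})}
```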
Labels: Apache NiFi
11-22-2023 12:28 AM

@CommanderLaus: First things first, as I see in your netcat command, you are connecting to port 31510, whereas your error message seems to point at port 1025. Something is not right here and you need to check your configurations. Try netcat on port 1025 as well and see if you have any connectivity, and besides netcat, try using telnet as well. Next, regarding your DBCP Connection Pool, in the property Database Driver Location(s) I highly recommend writing the full path to the JAR file rather than using "." or any other shortcuts.
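For example, a quick connectivity check from the NiFi host might look like this (the hostname is a placeholder):

```
# Test reachability on both ports seen in the configuration and the error
nc -vz db-host.example.com 31510
nc -vz db-host.example.com 1025
# telnet gives an interactive check of the same thing
telnet db-host.example.com 1025
```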
11-22-2023 12:19 AM

@Fanxxx, the first question would be: when nothing is found in the DB, does your GetMongo write a flowfile to a queue, or does it only log something to the Bulletin Board?

If a flowfile gets generated and sent to the failure queue, you can link that queue to your next processor and, using NiFi's Expression Language, perform any action you desire.

However, if nothing gets sent to the failure queue, you will need to build something else. You would need an InvokeHTTP in which you call NiFi's REST API and extract the Bulletin Board errors. You then filter out the messages generated by your GetMongo processor (using its unique ID) and proceed by extracting what you need out of them. If your error message contains all the necessary information, you can extract it, save it as attributes, send the flowfile on for further processing, and process it using NiFi's Expression Language. If the required information is not present in the error message, you will need to extract the query you tried to perform and pull the required information from there. From that point on the logic is the same: extract the information as attributes and send it downstream for further processing using NiFi's EL.

NiFi's Expression Language: https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html
NiFi's REST API: https://nifi.apache.org/docs/nifi-docs/rest-api/index.html
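As an illustration, the bulletin-board call might look roughly like this (the processor ID is a placeholder; check the REST API docs linked above for the exact parameters):

```
# Fetch recent bulletins, filtered to a single processor by its ID
GET /nifi-api/flow/bulletin-board?sourceId=<processor-uuid>&limit=10
```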
11-22-2023 12:10 AM

@Rohit1997jio, as @joseomjr already pointed out, by doing this you defeat the very purpose of using Kafka. As you already know, Kafka is a stream-processing platform that, at its most basic, functions as a queue of messages. By integrating Kafka with NiFi, especially via the ConsumeKafka processor, you basically create a bucket at the end of the queue. As long as messages are present in the queue (Kafka, in your case), you will have messages arriving in your bucket (your NiFi processing layer). When there are no messages in the Kafka system, your ConsumeKafka processor will be in a, let's call it, idle state, meaning that it will not waste resources in vain - it will, however, use some resources to check whether new messages have arrived or not.

That being said, I see no point in trying to kill a connection which is NOT affecting the involved systems in any way, and doing so basically defeats the entire purpose of using NiFi and Kafka. However, if you still want to achieve this, you will need to put in some extra effort. First of all, you need to create a flow which checks the state of the desired processor using NiFi's REST API - achievable in many ways, like InvokeHTTP, ExecuteStreamCommand, etc. If nothing has been done in the past 5 minutes (visible in the JSON received as a response from the REST API), you activate an InvokeHTTP in which you call the REST API again to STOP the ConsumeKafka processor.
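For illustration, stopping a processor through the REST API has roughly this shape (the ID and revision version are placeholders; the current revision has to be fetched first with GET /nifi-api/processors/{id}):

```
PUT /nifi-api/processors/<processor-uuid>/run-status
Content-Type: application/json

{
  "revision": { "version": 3 },
  "state": "STOPPED"
}
```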
11-06-2023 12:44 AM
2 Kudos

@Chaitanya_, First of all, why the infinite loop? What is the point of it and what were you actually trying to achieve? (just out of curiosity)

Now, in terms of your problem, I can give you two possible scenarios to try, which might eventually help you:

1) Add a new processor to your canvas (LogMessage, for example) and try moving the queue from the center funnel towards the right funnel into the new processor. Please make sure that all your processors are stopped and disabled and that your queues are empty. This should allow you to move the queue without any issues, while avoiding the infinite loop, which will eventually let you remove the funnels from your canvas. Also try removing the purple-highlighted queue, so you are certain that the loop is no longer a loop. Afterwards, you should be able to remove all the queues, working from right to left.

2) This might be a little hard and requires plenty of attention (and it is not really recommended), but in times of desperation you could try manually modifying flow.xml.gz and flow.json.gz to remove the parts describing those funnels. You can then upload the new version of the files to all NiFi nodes and you should no longer see those funnels on your canvas. However, before doing this, make sure that you create a backup of those files, in case you mess something up. Nevertheless, this is not really recommended, so I highly advise you to try the first solution before even attempting this one.

PS: make sure that all your nodes are up and running. Or stop the nodes, work on a single node and copy the flow.xml and flow.json to all the other nodes and start them. Hope it helps!
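If you do go down the second route, a sketch of the backup step (paths assume the default conf directory):

```
# Back up both flow definitions before touching them
cp conf/flow.xml.gz  conf/flow.xml.gz.bak
cp conf/flow.json.gz conf/flow.json.gz.bak
```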
11-06-2023 12:28 AM

@user001, I do not know what your Sandbox environment is, but I just tested with NiFi 1.15 and the solution works like a charm. So either you are doing something wrong, or you have something configured incorrectly in your sandbox environment. Or maybe NiFi 1.18 is bugged - but I highly doubt it, as there would have been far more posts reporting similar issues.
11-02-2023 02:01 AM
1 Kudo

@user001, How I would do this:
- Create an UpdateRecord processor, where I define a JsonTreeReader for reading the input file and a JsonRecordSetWriter for writing the newly formatted file.
- Within this same processor, add a new property whose name is the path to the column you are trying to modify. In your case, assuming that this is the entire JSON file, it would be /Key/value[*]/Date.
- The value of this newly added property should be the transformation you are trying to achieve. In your case, it would be ${field.value:toDate("MM/yy"):format("yyyy-MM")}
- Next, within the same exact processor, change the Replacement Value Strategy from its default value to Literal Value.

Put together, the configuration looks roughly like the sketch below, and that's pretty much all you have to do.
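A sketch of the resulting UpdateRecord configuration, based on the steps above (the record path assumes your JSON matches the structure in the question):

```
Record Reader:              JsonTreeReader
Record Writer:              JsonRecordSetWriter
Replacement Value Strategy: Literal Value

/Key/value[*]/Date          ${field.value:toDate("MM/yy"):format("yyyy-MM")}
```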