Member since
01-27-2023
229
Posts
74
Kudos Received
45
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1775 | 02-23-2024 01:14 AM |
|  | 2311 | 01-26-2024 01:31 AM |
|  | 1441 | 11-22-2023 12:28 AM |
|  | 3598 | 11-22-2023 12:10 AM |
|  | 3682 | 11-06-2023 12:44 AM |
12-07-2023
09:50 AM
Just to add some further info: the value I am extracting from my Oracle database is 2012-05-21 23:59:35, and within the Avro file (which reaches my UpdateRecord processor) the value is 1337633975000. If we transform the value from the Avro file, we will see that we are talking about 2012-05-21 20:59:35 (UTC timezone), and I need it as it was, in Europe/Bucharest.

I have tried ${field.value:isNull():ifElse('0', ${field.value:toDate():format('yyyy-MM-dd HH:mm:ss','Europe/Bucharest')})} but I get: java.lang.NumberFormatException: For input string: "2012-05-21 23:59:35"

I have also tried ${field.value:isNull():ifElse('0', ${field.value:toDate('yyyy-MM-dd HH:mm:ss','Europe/Bucharest'):format('yyyy-MM-dd HH:mm:ss','Europe/Bucharest')})} but I get: Cannot parse attribute value as a date; date format: yyyy-MM-dd HH:mm:ss; attribute value: 1337633975000

As for keeping the value unchanged when the original value is empty/null, that part I have not yet solved.
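For reference, the two renderings of that epoch value can be double-checked outside NiFi with a small Python sketch (the helper name is mine, not anything from NiFi):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def epoch_millis_to_local(millis: int, tz_name: str) -> str:
    """Convert epoch milliseconds to a formatted wall-clock string in tz_name."""
    dt_utc = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
    return dt_utc.astimezone(ZoneInfo(tz_name)).strftime("%Y-%m-%d %H:%M:%S")

# The value from the Avro file in the post:
print(epoch_millis_to_local(1337633975000, "UTC"))               # 2012-05-21 20:59:35
print(epoch_millis_to_local(1337633975000, "Europe/Bucharest"))  # 2012-05-21 23:59:35
```

So the Avro long and the Oracle value describe the same instant; only the rendering timezone differs.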
12-07-2023
08:34 AM
@MattWho , @stevenmatison , @SAMSAL : might you have some insight into this matter? I have been struggling with it for the last couple of days and I do not really know what to try next.
12-07-2023
08:32 AM
Hi there, I need your help with something. I am extracting 4 columns out of a database. Two of these 4 columns are DATE in Oracle and I want to convert them into DATETIME (as they are going to be inserted into BigQuery). In addition, I am using an UpdateRecord processor to generate a new column, and for that I am using the following Avro schema: {
"namespace": "example.avro",
"type": "record",
"name": "my_relevant_table",
"fields": [
{
"name": "ID_REQUEST",
"type": [
"string",
"null"
]
},
{
"name": "ID_PROCESS",
"type": [
"string",
"null"
]
},
{
"name": "ARCHIVE_DATE",
"type": [
"null",
{
"type": "long",
"logicalType": "local-timestamp-millis"
}
]
},
{
"name": "EXEC_DATE",
"type": [
"null",
{
"type": "long",
"logicalType": "local-timestamp-millis"
}
]
},
{
"name": "LOAD_DATE",
"type": [
"null",
{
"type": "int",
"logicalType": "date"
}
]
}
]
} Now, as you can imagine, I will encounter some NULL values within the ARCHIVE_DATE column; that is normal behavior. If I let the flow execute as it is, the Avro file gets generated even with the NULL value. However, I would like to change something: when the column has a value, I would like to modify it and assign it a timezone, because the actual value gets converted into UTC and I do not want that.

How can I achieve this using the UpdateRecord processor? I am pretty sure that I will have to use an ifElse statement, but in terms of values I have tried several things and all of them ended with an error: ${field.value:isNull():ifElse('0', ${field.value:toNumber():toDate('yyyy-MM-dd HH:mm:ss.SSS','Europe/Bucharest'):format('yyyy-MM-dd HH:mm:ss.SSS','Europe/Bucharest')})}

The thing is that if I use '0', it works and assigns the value 0 when the field value is empty or null (the column contains no value, no space, not even the string "null"). However, if I replace '0' with '' I receive a NumberConversion error. In this case, how can I make sure that the values within that column remain null/empty when they are like that, and otherwise apply the Europe/Bucharest timezone to that specific field value?
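For what it's worth, the behavior I am after can be modeled in Python (this is only a sketch of the intended logic, not NiFi code; the function name is mine):

```python
from datetime import datetime, timezone
from typing import Optional
from zoneinfo import ZoneInfo

def format_if_present(millis: Optional[int], tz_name: str = "Europe/Bucharest") -> Optional[str]:
    """Leave a null field null; otherwise render the epoch-millis value
    as wall-clock time in the target timezone."""
    if millis is None:  # the isNull() branch: keep the field empty
        return None
    local = datetime.fromtimestamp(millis / 1000, tz=timezone.utc).astimezone(ZoneInfo(tz_name))
    return local.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]  # trim to millisecond precision

print(format_if_present(None))            # None stays None
print(format_if_present(1337633975000))   # 2012-05-21 23:59:35.000
```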
Labels:
- Apache NiFi
11-22-2023
12:28 AM
@CommanderLaus: First things first: in your netcat command you are connecting to port 31510, whereas in your error message it seems you are going to port 1025. Something is not right here and you need to check your configurations. Try netcat on port 1025 as well and see if you have any connectivity, and besides netcat, try telnet too. Next, regarding your DBCP Connection Pool: in the Database Driver Location(s) property, I highly recommend writing the full path to the JAR file rather than using "." or any other shortcut.
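If netcat or telnet is not available on the machine, the same reachability check can be sketched in Python (the helper name is mine):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# e.g. can_connect("db-host", 1025) and can_connect("db-host", 31510)
```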
11-22-2023
12:10 AM
@Rohit1997jio, As @joseomjr already pointed out, doing this defeats the whole purpose of using Kafka. As you know, Kafka is a stream-processing platform that, at its simplest, functions as a queue of messages. By integrating Kafka with NiFi, especially via the ConsumeKafka processor, you essentially create a bucket at the end of that queue. As long as messages are present in the queue (Kafka, in your case), messages will keep arriving in your bucket (your NiFi processing layer). When there are no messages in Kafka, your ConsumeKafka processor will sit in what we could call an idle state, meaning it will not waste resources in vain; it will only use some resources to check whether new messages have arrived. That being said, I see no point in trying to kill a connection which is NOT affecting the involved systems in any way; doing so basically defeats the entire purpose of using NiFi and Kafka. However, if you still want to achieve this, you will need to put in some extra effort. First of all, create a flow which checks the state of the desired processor using NiFi's REST API (achievable in many ways: InvokeHTTP, ExecuteStreamCommand, etc.). If nothing has been done in the past 5 minutes (visible in the JSON received as a response from the REST API), trigger an InvokeHTTP that calls the REST API again to STOP the ConsumeKafka processor.
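A minimal sketch of that stop call, assuming the standard NiFi 1.x REST endpoints (GET /nifi-api/processors/{id} to obtain the current revision, then PUT /nifi-api/processors/{id}/run-status to stop it); the base URL and helper name below are placeholders:

```python
import json
from urllib import request

NIFI = "https://nifi-host:8443/nifi-api"  # placeholder base URL

def build_stop_request(processor_id: str, revision_version: int, client_id: str) -> request.Request:
    """Build the PUT /processors/{id}/run-status call that stops a processor.
    The revision must match what GET /processors/{id} returned, or NiFi
    rejects the update with a conflict."""
    body = json.dumps({
        "revision": {"version": revision_version, "clientId": client_id},
        "state": "STOPPED",
    }).encode()
    return request.Request(
        f"{NIFI}/processors/{processor_id}/run-status",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
```

In a secured cluster you would also attach an Authorization header (bearer token) before sending the request with `urllib.request.urlopen`.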
11-06-2023
12:44 AM
2 Kudos
@Chaitanya_, First of all, why the infinite loop? What is the point of it, and what were you actually trying to achieve? (just out of curiosity) Now, in terms of your problem, here are two possible approaches which might eventually help you:

1) Add a new processor to your canvas (LogMessage, for example) and try moving the queue from the center funnel towards the right funnel into the new processor. Please make sure that all your processors are stopped and disabled and your queues are empty. This should allow you to move the queue without any issues while avoiding the infinite loop, which will eventually let you remove the funnels from your canvas. Also try removing the purple-highlighted queue, so you are certain that the loop is no longer a loop. Afterwards, you should be able to remove all the queues, from right to left.

2) This one is harder and requires plenty of attention (and it is not really recommended), but in times of desperation you could try manually modifying flow.xml.gz and flow.json.gz and removing the parts describing those funnels. You can then upload the new version of the files to all NiFi nodes, and you should no longer see those funnels on your canvas. Before doing this, make sure you create a backup of those files in case you mess something up. Again, this is not really recommended, so I highly advise you to exhaust the first solution before even trying this one.

PS: make sure that all your nodes are up and running. Or stop the nodes, work on a single node, and copy the flow.xml.gz and flow.json.gz to all the other nodes before starting them. Hope it helps!
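For option 2, the mechanical part of the edit can be sketched in Python. This is illustration only: a real flow.xml also contains connections that reference the funnel ids, which would need the same treatment, and you should always work on a copy (the helper name is mine):

```python
import gzip
import xml.etree.ElementTree as ET

def strip_funnels(flow_xml_gz: bytes) -> bytes:
    """Remove every <funnel> element from a flow.xml.gz payload.
    Simplified sketch: does NOT touch <connection> elements that
    may still reference the removed funnels."""
    root = ET.fromstring(gzip.decompress(flow_xml_gz))
    for parent in list(root.iter()):          # snapshot before mutating
        for funnel in parent.findall("funnel"):
            parent.remove(funnel)
    return gzip.compress(ET.tostring(root))
```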
11-06-2023
12:28 AM
@user001, I do not know what your sandbox environment is, but I just tested with NiFi 1.15 and the solution works like a charm. So either you are doing something wrong, or something is configured incorrectly in your sandbox environment. Or maybe NiFi 1.18 is bugged, but I highly doubt it, as there would have been far more posts reporting similar issues.
11-02-2023
02:01 AM
1 Kudo
@user001, How I would do this:
- Create an UpdateRecord processor, where I define a JsonTreeReader for reading the input file and a JsonRecordSetWriter for writing the newly formatted file.
- Within this same processor, add a new property whose name is the path to the column you are trying to modify. In your case, assuming that this is the entire JSON file, that would be /Key/value[*]/Date.
- The value of this newly generated property should be the transformation you are trying to achieve. In your case, it would be ${field.value:toDate("MM/yy"):format("yyyy-MM")}
- Next, within the same exact processor, change the Replacement Value Strategy from its default value to Literal Value.

And that's pretty much all you have to do.
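The Expression Language transformation above is equivalent to this small Python sketch (helper name is mine), which can help when verifying the expected output:

```python
from datetime import datetime

def reformat_month(value: str) -> str:
    """Same transformation as ${field.value:toDate('MM/yy'):format('yyyy-MM')}."""
    return datetime.strptime(value, "%m/%y").strftime("%Y-%m")

print(reformat_month("05/23"))  # 2023-05
```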
11-01-2023
02:42 AM
@Wadok88, The problem you are reporting is not related to the database and how it works, but to how you configured your NiFi instance and especially your ZooKeeper. First of all, are you using an embedded ZooKeeper or an external one? Did you configure the state-management.xml file and the nifi.properties file with the correct connection string for your ZooKeeper nodes? Secondly, when running NiFi as a cluster, ZooKeeper is used to maintain the state of some processors across the entire cluster, meaning that those processors will attempt to use the state manager even if it was not configured, or, in your case, not configured correctly. So what I suggest is: check the ZooKeeper configuration within NiFi. Next, set your processor to DEBUG and check whether each node is able to retrieve the state via ZooKeeper. Maybe you have a connectivity issue from a specific node.
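For reference, the cluster state provider entry in state-management.xml typically looks like the fragment below (the hostnames are placeholders for your ZooKeeper nodes); nifi.properties must then point nifi.state.management.provider.cluster at the same id:

```xml
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">zk-host1:2181,zk-host2:2181,zk-host3:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>
```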
10-22-2023
11:11 PM
1 Kudo
@AhmedParvez, have a look at what I told you in the previous post. EOF means that most likely the API call you are sending is somehow malformed, and you need to take a closer look at it. Try making that same exact API call from Postman or from Python (or anything else you know how to use) and see if you encounter the same problem. Unfortunately this is not a NiFi problem but a configuration problem, and as you are very reluctant to provide all the details, nobody will be able to help you with your request 😞 Set your processor to DEBUG and see what extra information you can get from there as well. Maybe it will provide the info necessary to find your error.
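To reproduce the call from Python with nothing but the standard library, something like this sketch works (the URL and helper name are placeholders; adapt method, headers, and body to your actual call):

```python
from urllib import error, request

def probe(url: str) -> str:
    """Fire the same call outside NiFi and report what comes back."""
    try:
        with request.urlopen(request.Request(url, method="GET"), timeout=10) as resp:
            return f"HTTP {resp.status}"
    except error.HTTPError as exc:
        return f"HTTP {exc.code}"
    except OSError as exc:  # EOF / connection reset / refused show up here
        return f"transport error: {exc}"
```

If the same EOF appears here, the problem is on the server or network side, not in NiFi.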