Member since: 02-07-2019
Posts: 2729
Kudos Received: 239
Solutions: 31

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1458 | 08-21-2025 10:43 PM |
| | 2126 | 04-15-2025 10:34 PM |
| | 5486 | 10-28-2024 12:37 AM |
| | 2118 | 09-04-2024 07:38 AM |
| | 3993 | 06-10-2024 10:24 PM |
04-05-2024
12:25 AM
1 Kudo
Hello @ipson. To edit an entity in Atlas you need to export the entity (it downloads in zip format), extract the files, edit the required files, re-zip them preserving the original structure, and then import the updated archive back into Atlas in compressed (zip) format. You can use the Atlas export and import APIs to perform these steps.
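A minimal sketch of those steps using the Atlas Admin export/import REST endpoints (hypothetical host, credentials, and entity attributes; adjust for your environment):

```bash
# Export the entity as a zip (the request body selects what to export).
curl -u admin:admin -X POST \
  -H "Content-Type: application/json" \
  -d '{"itemsToExport":[{"typeName":"hive_table","uniqueAttributes":{"qualifiedName":"db.tbl@cluster"}}],"options":{"fetchType":"full"}}' \
  -o export.zip \
  "http://atlas-host:21000/api/atlas/admin/export"

# ... unzip export.zip, edit the JSON files, re-zip preserving the original layout ...

# Import the updated archive back into Atlas.
curl -u admin:admin -X POST \
  -H "Content-Type: multipart/form-data" \
  -F "data=@export.zip" \
  "http://atlas-host:21000/api/atlas/admin/import"
```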
04-04-2024
10:40 PM
1 Kudo
@bhagi Did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
04-04-2024
10:35 PM
1 Kudo
@felix_ Did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
04-01-2024
06:35 AM
1 Kudo
Hi Vidya, thanks a lot. I will try to explain my problem in a bit more detail.

I'm importing data from an event-based system. I'm using ListS3 and FetchS3Object to download Parquet files from an AWS S3 bucket. In this bucket every entity has a separate directory, which is further split up by the date the entity was updated. I'm using RouteOnAttribute to route the data into the corresponding table of a MySQL database.

The Parquet files contain updated records of the entities, but each record carries the full latest state of the entity rather than just the changes, so I could ignore older updates if I happen to process a newer update first. The files in the bucket have random names; ListS3 seems to list them in alphabetical order, and I didn't see any way to order the files by their last-modified time in S3. Every record contains a unique entity id and a timestamp indicating the entity's last update. Some of the entities also include a version number that I could use in addition to, or instead of, the timestamp.

To write the data into MySQL I'm currently using PutDatabaseRecord with statement type UPSERT. My plan was to check the latest update timestamp stored in MySQL: if no entry exists, perform an insert; if an entry exists and its timestamp is older (or its version number lower) than the record currently being processed, perform an update; if the entry in the database is already newer, just skip the record (see the sketch below). br, Stefan
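A minimal sketch of the conditional upsert described above, assuming a hypothetical table entity(id PRIMARY KEY, payload, updated_at); the IF() guards keep the existing row whenever the incoming record is older, which matches the insert/update/skip plan:

```bash
mysql -u app -p mydb <<'SQL'
-- New id: plain insert. Existing id: update only if the incoming timestamp is newer.
INSERT INTO entity (id, payload, updated_at)
VALUES ('abc-123', '{"state":"..."}', '2024-03-30 12:00:00')
ON DUPLICATE KEY UPDATE
  payload    = IF(VALUES(updated_at) > updated_at, VALUES(payload), payload),
  updated_at = IF(VALUES(updated_at) > updated_at, VALUES(updated_at), updated_at);
SQL
```

The same guards could compare a version column instead of (or in addition to) the timestamp.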
04-01-2024
02:38 AM
1 Kudo
@Choolake, Did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
04-01-2024
02:36 AM
1 Kudo
@frbelotto, @ZainK Did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
03-28-2024
08:30 AM
Thank you, @MattWho. You are awesome!
03-27-2024
12:30 PM
@jpalmer From the image you shared, the bottleneck is actually the custom PutGeoMesa 4.0.4 processor, which is not part of the out-of-the-box Apache NiFi distribution.

A connection has backpressure settings that limit the number of FlowFiles that can be queued (it is a soft limit, meaning backpressure gets applied once a connection's backpressure threshold is reached or exceeded). Once applied, backpressure is not released until the queue drops back below the configured thresholds. While backpressure is applied, the upstream processor is not scheduled to execute. A connection turns red when backpressure is being applied, and since the connection after PutGeoMesa 4.0.4 is not red, no backpressure is being applied to that processor. So your issue is that PutGeoMesa 4.0.4 cannot process the FlowFiles queued to it fast enough, causing the backlog in every upstream connection leading back to the source processor.

Since it is a custom processor, I can't speak to its performance or tuning capabilities, and I don't know whether it supports concurrent executions, but you could try the following. Right-click on the PutGeoMesa 4.0.4 processor, select Configure, and open the SCHEDULING tab. Within the Scheduling tab you can set "Concurrent Tasks". The default is 1, and this custom processor might ignore the property. Concurrent tasks allow the processor to execute multiple times concurrently (think of each additional concurrent task as creating another identical processor).

A processor component is scheduled to request a thread based on its configured Run Schedule (for the Timer Driven scheduling strategy, the default of 0 secs means schedule as fast as possible). When scheduled, the processor requests a thread from NiFi's Timer Driven thread pool, and that thread executes the processor's code against FlowFile(s) from the source connection. The scheduler then tries to schedule it again based on the Run Schedule; if Concurrent Tasks is still set to 1 and the previous execution is still running, it will not execute again until the in-use thread finishes. But if you set Concurrent Tasks to, say, 3, the processor could potentially execute 3 threads concurrently (each thread working on different FlowFile(s) from the source connection).

Again, I don't know whether this custom processor honors that property, nor whether it was coded in a thread-safe manner such that concurrent thread executions would not cause issues. So even if this appears to improve throughput, verify the integrity of the data coming out of the processor.

Also keep in mind that adding concurrent tasks to a processor (especially one like this that appears to have long-running threads; we can see it processed only 23 FlowFiles using 4.5 minutes of CPU time, which is quite slow) can quickly consume all the available threads from the Max Timer Driven Thread pool, making other processors appear slower because they get an available thread less often. You can increase the size of the Max Timer Driven Thread pool from the NiFi global menu in the upper right corner, but do so carefully, monitoring CPU load average and memory usage as you slowly increase the setting (see the sketch below).

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.
Thank you, Matt
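A minimal sketch of the monitoring described above (hypothetical host, port, and credentials; assumes your NiFi exposes the REST API, where /nifi-api/controller/config reports maxTimerDrivenThreadCount):

```bash
# Read the current Max Timer Driven Thread Count (adjust host, port, and auth for your NiFi).
curl -s -u admin:password "https://nifi-host:8443/nifi-api/controller/config"

# While slowly raising the pool size, watch CPU load average and memory on every node.
uptime
free -m
```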
03-26-2024
11:57 AM
Hi @datafiber, it seems your NameNode is in safe mode. I'm not sure why it entered safe mode, but you can try taking it out manually, then retry the operation and monitor the logs (see the sketch below). Run the following from the NameNode:

# hdfs dfsadmin -safemode leave
# hdfs dfsadmin -safemode get
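A minimal sketch of that sequence, with a hedged log path (the NameNode log location varies by distribution):

```bash
# Leave safe mode, confirm the new state, then watch the NameNode log while retrying.
hdfs dfsadmin -safemode leave
hdfs dfsadmin -safemode get
tail -f /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log
```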
03-26-2024
11:51 AM
Hi @user2024, I don't think the canary file is going to cause this issue. The blocks that are corrupt/missing are now lost and cannot be recovered. You can identify those blocks with the command below and manually delete the affected files, then run the HDFS balancer so the NameNode spreads the new blocks across the cluster (see the sketch below):

# hdfs fsck / -list-corruptfileblocks

You can also refer to the article below. https://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hdfs-files
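A minimal sketch of that cleanup, assuming the affected files have been confirmed expendable (the example path is hypothetical):

```bash
# List the files with corrupt/missing blocks.
hdfs fsck / -list-corruptfileblocks

# Remove a specific affected file, or let fsck delete every corrupted file in one pass.
hdfs dfs -rm /data/path/to/corrupt-file
hdfs fsck / -delete

# Rebalance block placement across the cluster afterwards.
hdfs balancer
```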