Member since: 08-14-2023
Posts: 19
Kudos Received: 4
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1238 | 01-31-2024 02:21 AM |
01-31-2024
02:21 AM
1 Kudo
Upon examination, setting the queue prioritizer to First In First Out and configuring the connection's Load Balance Strategy to "Partition by attribute" using the kafka.partition attribute has proven effective in maintaining record order.
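The idea above can be sketched outside NiFi: routing every record with the same partition attribute to the same node, and keeping each node's queue FIFO, preserves per-partition order. This is a minimal illustrative sketch, not NiFi code; the attribute-hash routing is an assumption about how such a strategy could work.

```python
# Hypothetical sketch (not NiFi internals): load-balancing by a partition
# attribute keeps all records for one Kafka partition on one node's queue,
# so FIFO order within each partition is preserved.
from collections import defaultdict

def route_by_attribute(flowfiles, num_nodes):
    """Assign each flowfile to a node queue based on its kafka.partition attribute."""
    queues = defaultdict(list)  # node index -> FIFO list of flowfiles
    for ff in flowfiles:
        partition = ff["kafka.partition"]
        node = hash(str(partition)) % num_nodes  # consistent per-attribute hash (assumption)
        queues[node].append(ff)  # appended in arrival order -> FIFO per queue
    return queues

# Records from two partitions, interleaved in arrival order.
flowfiles = [{"kafka.partition": p, "offset": o}
             for p, o in [(0, 1), (1, 1), (0, 2), (1, 2), (0, 3)]]
queues = route_by_attribute(flowfiles, num_nodes=3)
# All records from a given partition land in one queue, in offset order.
```

The key property: records of a single partition never split across queues, so no reordering can occur between nodes.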
01-31-2024
02:17 AM
1 Kudo
I'm trying to use CaptureChangeMySQL with MariaDB, but I'm getting only one record, which contains nothing but database metadata (the table has 10 records): {"type":"commit","timestamp":1706628370000,"binlog_filename":"mysql-bin.000001","binlog_position":516,"database":"copy"} Can CaptureChangeMySQL work with MariaDB?
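For context, CaptureChangeMySQL emits transaction-boundary events ("begin", "commit") and DDL events alongside row-data events ("insert", "update", "delete"). A quick way to see whether any row data is arriving is to classify each JSON event by its type field, as in this hedged sketch:

```python
import json

# Sketch: separate CDC metadata events from row-data events.
# A stream containing only "commit" events (like the output above)
# means no row changes were read from the binlog.
DATA_EVENTS = {"insert", "update", "delete"}

def is_data_event(raw: str) -> bool:
    """Return True only for CDC events that carry row data."""
    event = json.loads(raw)
    return event.get("type") in DATA_EVENTS

commit_event = ('{"type":"commit","timestamp":1706628370000,'
                '"binlog_filename":"mysql-bin.000001",'
                '"binlog_position":516,"database":"copy"}')
print(is_data_event(commit_event))  # False: metadata only, no row data
```

Seeing only a commit event suggests the processor connected to the binlog but produced no row events for the table.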
Labels:
- Apache NiFi
11-13-2023
05:44 AM
It looks like the high heap usage on the primary was caused by too many open files. Is there a way to identify the processor that opens many files? We are using OpenJDK 17.0.7, NiFi 1.21.0, and custom processors.

@MattWho wrote:

@edim2525 GC kicks in around 80% of heap memory usage. You could certainly enable GC debug logging to verify that GC is executing. GC can only clean up unused memory (memory no longer being held by a process). I see you have three NiFi nodes. Are you only having heap memory usage issues on the one node? I see the node with growing heap usage is the elected primary node.
- What processors do you have running with "primary node" execution?
- Does your primary node have a lot more queued FlowFiles than the other nodes?
- If you disconnect the primary node from your cluster, which will force a new primary node to be elected, does the heap then start to grow on the newly elected primary node?
- What version of NiFi are you running?
- What version of Java is your NiFi running with?
- Have you collected heap dumps and analyzed them to see where the heap is being used?
- Do you have any custom processors added to your NiFi?
- Are you using any scripting-based processors where you have written your own code that is executed within NiFi?
Matt

```
2023-11-11 04:13:18,240 ERROR [Timer-Driven Process Thread-30] o.a.n.p.kafka.pubsub.PublishKafka_2_6 PublishKafka_2_6[id=c8d6883a-8b07-3d38-b28f-fc8263c83a39] Processing halted: yielding [1 sec]
java.lang.IllegalStateException: Cannot complete publishing to Kafka because Publisher Lease was already closed
	at org.apache.nifi.processors.kafka.pubsub.PublisherLease.complete(PublisherLease.java:476)
	at org.apache.nifi.processors.kafka.pubsub.PublishKafka_2_6.onTrigger(PublishKafka_2_6.java:490)
	at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
	at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1360)
	at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:246)
	at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
	at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
2023-11-11 04:13:18,241 WARN [Timer-Driven Process Thread-30] o.a.n.controller.tasks.ConnectableTask Processing halted: uncaught exception in Component [PublishKafka_2_6[id=c8d6883a-8b07-3d38-b28f-fc8263c83a39]]
java.lang.IllegalStateException: Cannot complete publishing to Kafka because Publisher Lease was already closed
	at org.apache.nifi.processors.kafka.pubsub.PublisherLease.complete(PublisherLease.java:476)
	at org.apache.nifi.processors.kafka.pubsub.PublishKafka_2_6.onTrigger(PublishKafka_2_6.java:490)
	at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
	at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1360)
	at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:246)
	at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
	at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
2023-11-11 04:13:18,247 ERROR [Listen to Bootstrap] org.apache.nifi.BootstrapListener Failed to process request from Bootstrap due to java.io.IOException: Too many open files
java.io.IOException: Too many open files
	at java.base/sun.nio.ch.Net.accept(Native Method)
	at java.base/sun.nio.ch.NioSocketImpl.timedAccept(NioSocketImpl.java:711)
	at java.base/sun.nio.ch.NioSocketImpl.accept(NioSocketImpl.java:752)
	at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:675)
	at java.base/java.net.ServerSocket.platformImplAccept(ServerSocket.java:641)
```
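To investigate which component is leaking file descriptors, one option on Linux is to sample the NiFi process's open-descriptor count over time and correlate spikes with processor activity. A minimal sketch, assuming a Linux host with a /proc filesystem (combine with `lsof -p <pid>` to see what each descriptor points at):

```python
import os

# Linux-only sketch: count open file descriptors for a PID by listing
# /proc/<pid>/fd. Sampling this for the NiFi JVM PID over time can show
# when fd usage grows, which can then be correlated with which
# processors were running at that moment.
def count_open_fds(pid: int) -> int:
    """Return the number of file descriptors currently open by the process."""
    return len(os.listdir(f"/proc/{pid}/fd"))

# Demonstrate on our own process (a JVM PID would be used in practice).
print(count_open_fds(os.getpid()))
```

NiFi itself also exposes "Open File Handles" under node system diagnostics, which avoids shell access entirely.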
09-26-2023
07:48 AM
@MattWho My cluster is using the single-user-authorizer. I tried your method on a running three-node cluster configured with the single-user-authorizer: I updated the three files (nifi.properties, login-identity-providers.xml, authorizers.xml) to work with an LDAP configuration. When I restarted the first node (not the primary or coordinator), I got the following error messages in the log:

```
2023-09-26 11:20:34,441 ERROR [main] o.s.web.context.ContextLoader Context initialization failed
2023-09-26 11:50:19,381 ERROR [main] o.a.nifi.controller.StandardFlowService Failed to load flow from cluster due to: org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed to connect node to cluster because local flow controller partially updated. Administrator should disconnect node and review flow for corruption.
2023-09-26 11:50:19,595 ERROR [main] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for xxx:8443 -- Node disconnected from cluster due to org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed to connect node to cluster because local flow controller partially updated. Administrator should disconnect node and review flow for corruption.
```

The LDAP configuration takes effect only after restarting all the nodes.
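For reference, the kind of LDAP entry involved in such a migration looks roughly like this in login-identity-providers.xml. This is a hedged, minimal example: the host, DNs, password, and search base are placeholders, and the exact properties should be checked against your NiFi version's Administration Guide.

```xml
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <!-- Placeholder values below: adjust for your directory server -->
    <property name="Authentication Strategy">SIMPLE</property>
    <property name="Manager DN">cn=admin,dc=example,dc=org</property>
    <property name="Manager Password">changeme</property>
    <property name="Url">ldap://ldap.example.org:389</property>
    <property name="User Search Base">ou=users,dc=example,dc=org</property>
    <property name="User Search Filter">(uid={0})</property>
    <property name="Identity Strategy">USE_USERNAME</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>
```

Because authorizers.xml changes alter the flow's security configuration, a mixed cluster (some nodes on the old config, some on the new) can fail flow synchronization as seen above, which is consistent with the change only taking effect after all nodes restart.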
08-27-2023
11:53 PM
Hi all, I'm trying to replicate data between two Kafka topics defined with cleanup.policy=compact using NiFi 1.21.0. The problem is that null messages (tombstones) are replicated with the wrong value (an empty value instead of null). I'm using PublishKafka_2_6 to write the data to the target topic. For example: message key 666, source topic edi_test_compact, target topic edi_test_compact2.
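The distinction at stake: in a compacted topic, a record with a null value is a tombstone that compaction uses to delete the key, while a record with an empty value is an ordinary record. A replicator must keep the two apart. A minimal sketch (plain Python, no Kafka client) contrasting a value-preserving pass-through with the kind of coercion that loses tombstones:

```python
# A Kafka tombstone is a record whose value is null (None here); compaction
# treats it as a delete marker for the key. An *empty* value (b"") is a
# normal record and will NOT trigger deletion of key 666 in the target topic.

def replicate_value(value):
    """Pass the value through unchanged, keeping None distinct from b''."""
    if value is None:
        return None      # tombstone: keep as null so compaction deletes the key
    return value         # regular payload, including legitimately empty bytes

def buggy_replicate_value(value):
    """Illustrates the failure mode: any falsy value is coerced to b''."""
    return value if value else b""   # None silently becomes b"" -> tombstone lost

assert replicate_value(None) is None          # tombstone preserved
assert buggy_replicate_value(None) == b""     # tombstone destroyed
```

If the target topic shows key 666 with an empty value instead of no value, the pipeline is effectively doing what `buggy_replicate_value` does somewhere between consume and publish.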
Labels:
- Apache NiFi
08-20-2023
01:06 AM
I appreciate the comprehensive response, thanks.
08-16-2023
06:57 AM
I set up the authorizers.xml file as you suggested and it's working perfectly. Thank you very much @MattWho!