Member since: 03-16-2024
Posts: 7
Kudos Received: 3
Solutions: 0
12-08-2024
04:19 AM
Hi everyone,

I am facing an issue while running a Sqoop import job. The process gets stuck at:

INFO mapreduce.Job: map 0% reduce 0%

The job does not progress further. Additionally, I see the following messages in the logs:

INFO conf.Configuration: resource-types.xml not found.
INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.

Despite this, there are plenty of available resources in the cluster, so resource allocation should not be the problem. I also have 3 healthy nodes. I have tried various troubleshooting steps, but I have not been able to resolve the issue.

Could the "resource-types.xml not found" message be causing this problem? If so, where should the resource-types.xml file be placed in a Cloudera Data Platform (CDP) setup?

Any help or suggestions would be greatly appreciated!
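For context, this is what I would expect a minimal resource-types.xml to contain if one had to be supplied at all; the contents below are my assumption rather than anything taken from our cluster, and the usual location would be next to yarn-site.xml in the Hadoop client configuration directory (which in CDP is normally managed through Cloudera Manager rather than edited by hand):

<?xml version="1.0"?>
<configuration>
  <!-- No custom resource types declared; memory-mb and vcores are the built-in YARN resources. -->
</configuration>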
12-08-2024
03:45 AM
Any ideas, please!
12-06-2024
05:29 AM
1 Kudo
Hi all,

When running a Sqoop import, the job gets stuck at:

INFO mapreduce.Job: map 0% reduce 0%

and the process does not progress further. Additionally, the following messages appear in the logs:

INFO conf.Configuration: resource-types.xml not found.
INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.

Despite these messages, there are still plenty of available resources in the cluster, so resource allocation should not be the issue.

How can I resolve the "resource-types.xml not found" issue? Where should the resource-types.xml file be placed in CDP?
Labels: Apache Sqoop
06-21-2024
05:20 AM
In my case the problem was related to storage issues.
06-19-2024
12:07 PM
1 Kudo
Hi everyone,

We have a NiFi cluster with 3 nodes that was functioning fine until we encountered the following error. The cluster uses an embedded ZooKeeper for coordination. The error logs indicate issues with connection loss and leadership. Here are the relevant log entries:

2024-06-19 16:25:05,335 WARN [Process Cluster Protocol Request-25] o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol message from nifi01 due to org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling protocol message in response to message type: HEARTBEAT due to java.net.SocketException: Relais brisé (pipe) (Write failed)
org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling protocol message in response to message type: HEARTBEAT due to java.net.SocketException: Relais brisé (pipe) (Write failed)
    at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.dispatchRequest(SocketProtocolListener.java:186)
    at org.apache.nifi.io.socket.SocketListener$2$1.run(SocketListener.java:131)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)

("Relais brisé (pipe)" is the French-locale text for "Broken pipe".)

The cluster was operating normally before this issue arose. Now, it appears to be having trouble with leadership roles and communication between nodes.

Questions:
1. What could be causing this connection problem?
2. How can we troubleshoot and resolve it to restore normal cluster operations?
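For reference, these are the cluster- and ZooKeeper-related entries in nifi.properties that I am double-checking on each node; the values shown here are placeholders, not our actual settings:

# nifi.properties (placeholder values)
nifi.cluster.is.node=true
nifi.cluster.node.address=nifi01
nifi.cluster.node.protocol.port=11443
nifi.zookeeper.connect.string=nifi01:2181,nifi02:2181,nifi03:2181
nifi.state.management.embedded.zookeeper.start=true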
Labels: Apache NiFi
06-19-2024
11:51 AM
1 Kudo
Hi everyone,

I'm new to NiFi and I'm trying to ingest data from more than 20 tables in an Oracle database into Cassandra. I have a few questions regarding the process:

1. Can I transfer data from all the tables at once, without having to process each table individually? If so, what is the best approach to achieve this in NiFi?
2. Is there a method to automatically create tables in Cassandra using Avro schemas within the NiFi flow? This is particularly important because in some use cases we need to overwrite the table instead of appending data. How can we handle this scenario efficiently?
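For illustration, this is the kind of CQL I would otherwise have to write by hand for each of the 20+ tables; the keyspace, table, and column names below are made up, not from our actual schema:

CREATE TABLE IF NOT EXISTS my_keyspace.customers (
    customer_id bigint PRIMARY KEY,
    name text,
    created_at timestamp
);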
Labels: Apache NiFi
06-13-2024
02:40 PM
I have a NiFi cluster consisting of 3 nodes, and I secured the cluster using a single signed certificate for all nodes. However, I am encountering an error that I suspect might be due to using just one certificate.

Error details (logs):

[Replicate Request Thread-25] o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET /nifi-api/flow/current-user to nifi01:8443 due to javax.net.ssl.SSLPeerUnverifiedException: Hostname nifi01 not verified:
    certificate: sha256/*********/GessD8=
    DN: CN=nifi01
    subjectAltNames: [nifi03,nifi02]
2024-06-13 17:34:07,555 WARN [Replicate Request Thread-25] o.a.n.c.c.h.r.ThreadPoolRequestReplicator
javax.net.ssl.SSLPeerUnverifiedException: Hostname nifi01 not verified:
    certificate: sha256/************/GessD8=
    DN: CN=nifi01
    subjectAltNames: [nifi03,nifi02]
    at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.kt:389)
    at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.kt:337)
    at okhttp3.internal.connection.RealConnection.connect(RealConnection.kt:209)
    at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.kt:226)
    at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.kt:106)
    at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.kt:74)
    at okhttp3.internal.connection.RealCall.initExchange$okhttp(RealCall.kt:255)
    at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:32)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
    at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
    at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
    at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
    at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
    at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154)
    at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:136)
    at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:130)
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:645)
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:869)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)

Could using a single certificate for all three nodes (imported into the truststore of all nodes) be causing this issue? Any guidance or best practices would be greatly appreciated.
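For reference, if I reissue the certificate, I assume the subject alternative names would need to cover every node hostname, along the lines of the sketch below; the alias, keystore name, password, and validity are placeholders, not what we actually used:

keytool -genkeypair -alias nifi-cert -keyalg RSA -keysize 2048 \
  -dname "CN=nifi01" \
  -ext "SAN=dns:nifi01,dns:nifi02,dns:nifi03" \
  -keystore keystore.jks -storepass changeit -validity 730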
Labels: Apache NiFi