Member since: 05-19-2018 | Posts: 23 | Kudos Received: 1 | Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1770 | 09-25-2018 12:24 AM |
05-31-2022
05:29 AM
Came here via Google, just for other people: NiFi has supported multipart with InvokeHTTP for a few releases now: https://palindromicity.blogspot.com/2020/04/sending-multipart-form-data-with.html
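From memory, the InvokeHTTP setup looks roughly like the following; the linked post has the authoritative details, so treat the exact property names here as assumptions:

Content-Type: multipart/form-data
FlowFile Form Data Name: file             (names the form part that carries the FlowFile content)
post:form:description: some text value    (dynamic property; each one becomes an additional form part)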
05-13-2019
07:41 AM
There are no leftovers from the old version and, as you mentioned, the log4j JARs from NiFi can't be removed. So we are stuck here, and I guess it will not get any better with newer driver versions :-S...
04-30-2019
12:04 AM
1 Kudo
We plan to upgrade the Impala server side to the newest CDH 6.2.0 release, hence we would like to upgrade the Cloudera JDBC Connector from 2.6.4 to 2.6.9 as well (https://www.cloudera.com/documentation/other/connectors/impala-jdbc/Cloudera-JDBC-Driver-for-Impala-Release-Notes.pdf). However, as soon as we switch "ImpalaJDBC41.jar" to the new 2.6.9 version within Apache NiFi 1.9.2, we get the following stack trace from NiFi when we start an ExecuteSQL processor:

2019-04-26 18:30:11,497 ERROR [Timer-Driven Process Thread-3] o.a.nifi.processors.standard.ExecuteSQL ExecuteSQL[id=5957fa4c-016a-1000-ffff-ffffcdfcd7bc] ExecuteSQL[id=5957fa4c-016a-1000-ffff-ffffcdfcd7bc] failed to process session due to java.lang.ExceptionInInitializerError; Processor Administratively Yielded for 1 sec: java.lang.ExceptionInInitializerError
java.lang.ExceptionInInitializerError: null
at com.cloudera.impala.jdbc41.internal.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at com.cloudera.impala.jdbc41.internal.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at com.cloudera.impala.jdbc41.internal.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at com.cloudera.impala.jdbc41.internal.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at com.cloudera.impala.jdbc41.internal.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at com.cloudera.impala.jdbc41.internal.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at com.cloudera.impala.jdbc41.internal.apache.thrift.transport.TIOStreamTransport.<clinit>(TIOStreamTransport.java:38)
at com.cloudera.impala.hivecommon.api.TETSSLTransportFactory.createClient(Unknown Source)
at com.cloudera.impala.hivecommon.api.TETSSLTransportFactory.getClientSocket(Unknown Source)
at com.cloudera.impala.hivecommon.api.HiveServer2ClientFactory.createTransport(Unknown Source)
at com.cloudera.impala.hivecommon.api.ServiceDiscoveryFactory.createClient(Unknown Source)
at com.cloudera.impala.hivecommon.core.HiveJDBCCommonConnection.establishConnection(Unknown Source)
at com.cloudera.impala.impala.core.ImpalaJDBCConnection.establishConnection(Unknown Source)
at com.cloudera.impala.jdbc.core.LoginTimeoutConnection.connect(Unknown Source)
at com.cloudera.impala.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
at com.cloudera.impala.jdbc.common.AbstractDriver.connect(Unknown Source)
at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:53)
at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:291)
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2395)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2381)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2110)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:470)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:87)
at com.sun.proxy.$Proxy133.getConnection(Unknown Source)
at org.apache.nifi.processors.standard.AbstractExecuteSQL.onTrigger(AbstractExecuteSQL.java:222)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1162)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:209)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalStateException: Detected both log4j-over-slf4j.jar AND bound slf4j-log4j12.jar on the class path, preempting StackOverflowError. See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more details.
at com.cloudera.impala.jdbc41.internal.slf4j.impl.Log4jLoggerFactory.<clinit>(Log4jLoggerFactory.java:54)
... 43 common frames omitted

To me this looks like a log4j logging library incompatibility between NiFi 1.9.2 and the Impala JDBC driver starting from release 2.6.6. The driver works in other tools like Zeppelin or DbVisualizer. It would be great to see this working again. Any ideas how we can get a fix, or how we can work around this issue? Cheers Josef
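One workaround we could try (an untested sketch; the package path inside the jar is taken from the stack trace above, everything else is an assumption): strip the driver's shaded slf4j-log4j12 binding so it can no longer collide with NiFi's log4j-over-slf4j:

# back up the driver first, then delete the shaded binding classes
cp ImpalaJDBC41.jar ImpalaJDBC41.jar.bak
zip -d ImpalaJDBC41.jar 'com/cloudera/impala/jdbc41/internal/slf4j/impl/*'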
Labels:
- Apache Impala
11-05-2018
08:48 AM
I tried the ALTER command below with impala-shell 2.12.0 and Kudu 1.7.0. However, I'm getting an error. My table is an external table in Impala. The error message is strange: of course the new table doesn't exist, I want to create it with the command...

ALTER TABLE res_dhcp_int SET TBLPROPERTIES('kudu.table_name'='res_dhcp_int');

Query: ALTER TABLE res_dhcp_int SET TBLPROPERTIES('kudu.table_name'='res_dhcp_int')
ERROR: TableLoadingException: Error loading metadata for Kudu table res_dhcp_int
CAUSED BY: ImpalaRuntimeException: Error opening Kudu table 'res_dhcp_int', Kudu error: The table does not exist: table_name: "res_dhcp_int"

Is this a bug? EDIT: I just read IMPALA-5654; it seems that with Impala 2.12.0 this ALTER command doesn't work anymore! I need an alternative for that 😞
09-25-2018
11:37 PM
There is now a bug report for Impala: https://issues.apache.org/jira/browse/IMPALA-7618
09-25-2018
12:24 AM
Thanks to the getkudu Slack channel I found a solution for my issue. Just in case someone else is facing this as well: reorder the "VALUES" keyword and the number so that you can use "<=" instead of ">="...

Original: ALTER TABLE test_sql_drop DROP RANGE PARTITION VALUES >= 1536311717;
Workaround: ALTER TABLE test_sql_drop DROP RANGE PARTITION 1536311717 <= VALUES;

Cheers
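The same ordering should apply when adding a range back later; a sketch with a made-up upper bound:

-- hypothetical example: re-adding a bounded range using the "<=" ordering
ALTER TABLE test_sql_drop ADD RANGE PARTITION 1536311717 <= VALUES < 1536398117;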
09-24-2018
07:16 AM
Hi, I can't remove the upper range partition of a Kudu table, and it seems to be because of the "greater than" sign... Can somebody tell me what I'm doing wrong? Example table "test_sql_drop":

HASH (flowEndDate, uniqueID) PARTITIONS 16,
RANGE (flowEndDate) (
PARTITION VALUES < 1535102117,
PARTITION 1535102117 <= VALUES < 1535188517,
PARTITION 1536138917 <= VALUES < 1536225317,
PARTITION 1536225317 <= VALUES < 1536311717,
PARTITION VALUES >= 1536311717
)

The following query to remove the lower limit works like a charm:

ALTER TABLE test_sql_drop DROP RANGE PARTITION VALUES < 1535102117;

However, if I try the same with the upper limit, it doesn't work:

ALTER TABLE test_sql_drop DROP RANGE PARTITION VALUES >= 1536311717;
java.sql.SQLException: [Cloudera][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: TStatus(statusCode:ERROR_STATUS, sqlState:HY000, errorMessage:AnalysisException: Syntax error in line 1:
...OP RANGE PARTITION VALUES >= 1536311717;
^
Encountered: >
Expected: COMMA
CAUSED BY: Exception: Syntax error
), Query: ALTER TABLE test_sql_drop DROP RANGE PARTITION VALUES >= 1536311717;.
at com.cloudera.impala.hivecommon.api.HS2Client.executeStatementInternal(Unknown Source)
at com.cloudera.impala.hivecommon.api.HS2Client.executeStatement(Unknown Source)
....

Pretty simple question, but now we're stuck because we can't extend the upper limit of our Kudu tables (at least with the Impala JDBC driver). Thanks in advance. Cheers
Labels:
- Apache Impala
- Apache Kudu
08-31-2018
07:55 AM
Very old topic, but still valid. Do I understand it right that if we have no provenance enabled, it makes no sense to have the content repository archive enabled? And if both are enabled, how can we recover a single flow via the provenance window?
05-25-2018
01:53 AM
Hi, we are doing a PoC with Kudu and Impala. For testing purposes we are also using Spark to read Parquet files from the local disk, which is pretty easy:
val df_parquet1 = spark.read.format("parquet").
load("file:///work/testParquetGZ")
df_parquet1.createOrReplaceTempView("test_parquet1")
and then we are able to query it directly within Spark:
%sql
select *
from test_parquet1
limit 100
I'm looking for a similar approach for Impala. Is it really a must to load the Parquet files into HDFS storage? In our case this makes no sense: we mainly use Kudu, so the HDFS part is only there to get Impala running. Our idea is to store the Parquet files on a big file share, but without HDFS, as that would generate additional overhead.
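For contrast, the HDFS-backed approach we are trying to avoid would look roughly like this (paths are made up, and the LIKE PARQUET schema inference is how I remember the Impala syntax, so treat it as a sketch):

-- infer the schema from one of the Parquet files, then point the table at the directory
CREATE EXTERNAL TABLE test_parquet1
LIKE PARQUET '/data/testParquetGZ/part-00000.parquet'
STORED AS PARQUET
LOCATION '/data/testParquetGZ';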
So my question: how can I access Parquet files with Impala from (local) disk, without HDFS?
Cheers
Labels:
- Apache Impala
- Apache Kudu
- Apache Spark
- HDFS
05-21-2018
11:50 AM
Thanks Todd! So the fact that we are using 8 key columns (btw all int32, i.e. 4 bytes each, so 32 bytes per row) seems to cause this huge amount of metadata. To sum up, the 32 bytes plus approx. 10 bits per row seems to be the gap between data_size and on_disk_size... Am I correct? And is there compression for the composite key column? I thought there was no compression at all...
05-20-2018
02:34 AM
I've checked some Kudu metrics and found out that at least the metric "kudu_on_disk_data_size" shows more or less the same size as the Parquet files. However, the "kudu_on_disk_size" metric correlates with the size on disk. I've created a new thread to discuss those two Kudu metrics; I hope somebody can explain the difference. New thread: https://community.cloudera.com/t5/Interactive-Short-cycle-SQL/Kudu-Metrics-kudu-on-disk-data-size-amp-kudu-on-disk-size/m-p/67454#M4527
05-20-2018
02:24 AM
Hi guys, my question is related to the following two metrics:

kudu_on_disk_data_size [Space used by this tablet's data blocks.] -> 1494 MB
kudu_on_disk_size [Size of this tablet on disk.] -> 3010 MB

I've verified those two metrics for one example tablet. The kudu_on_disk_size makes sense in that it matches what I see with "du" on Linux. However, how is it possible that kudu_on_disk_size is, in my example, twice as big as kudu_on_disk_data_size? What kind of data is stored on disk besides the raw data? A small hint regarding the data on this tablet: I'm using a schema with 8 primary key columns (all integers) out of 21 columns. What I can say is that the kudu_on_disk_data_size metric is more or less the same as the size of the same data in Parquet format; at least that makes sense to me. Thanks in advance
Labels:
- Apache Kudu
05-19-2018
03:02 PM
Hi guys, we have done some tests comparing Kudu with Parquet. In total the Parquet data was about 170 GB. Our issue is that Kudu uses about twice as much disk space as Parquet (without any replication). We measured the size of the data folder on disk with "du"; the WAL was in a different folder, so it wasn't included. Below is the schema for our table; columns 0-7 form the primary key, and we can't change that because of the uniqueness requirement. We are working with Kudu 1.6.0. Any ideas why Kudu uses twice as much space on disk as Parquet? Or is this expected behavior? We created about 2400 tablets distributed over 4 servers. Cheers
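For reference, roughly how we measured (directory paths here are examples, not defaults):

# Kudu data directory; the WAL lives in a separate folder and is excluded
du -sh /data/kudu/data
# the same dataset as Parquet files, for comparison
du -sh /data/parquet/testset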
Labels:
- Apache Kudu
05-19-2018
02:43 PM
Hi guys, for one of our use cases we have about 30 TB of data compressed as Parquet. We are now testing Kudu, and I'm asking myself how big one tablet should be to get the best performance (write and query) out of it. Is there any recommendation, e.g. 1 GB per tablet? What we see is that the bigger a tablet gets, the slower inserting seems to become. However, 1 GB is way too small, as we would need 15 servers (30,000 GB / 2,000 [max number of tablets per server as written in the docs] = 15) without taking replication into account. Additionally, the docs recommend not to use more than 100 servers... We are working with Kudu 1.6.0. Thanks in advance
Labels:
- Apache Kudu
04-25-2018
07:50 AM
Sorry, I can't help you with that. I have no knowledge about your certs and their certification path.
04-25-2018
06:14 AM
@Lawand Suraj: the certification path is not a path on your disk; it is a problem with your certs within the keystore/truststore. Check my screenshot below. However, my issue is still there.
02-23-2018
04:24 PM
I figured it out. The culprits were at least 2 custom processors. As soon as I deleted them from the configuration/web GUI, the CPU load went down to more or less 0% and the error messages were gone. I will verify with the developer why that happens. Check out my Jira ticket for more information: https://issues.apache.org/jira/projects/NIFI/issues/NIFI-4905
02-23-2018
01:01 PM
Hi, we are currently working on a PoC with 8 nodes (HP BL460c blades, 24 cores, 44 GB RAM) in a NiFi 1.5.0 cluster. Our configuration has about 170 processors and all of them are stopped. Even in the stopped state, we are constantly getting the messages below, for all nodes, not only for the primary node:

Response time from nifi2-07.xyz.ch:8080 was slow for each of the last 3 requests made. To see more information about timing, enable DEBUG logging for org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator

If you are on the root canvas, you feel that it takes a few seconds until it responds after a refresh. We have already tuned the parameters below, but without any luck. After a restart of NiFi it is fine for a few minutes, but then the messages return.

nifi.cluster.protocol.heartbeat.interval=15 sec
nifi.cluster.node.protocol.threads=40
nifi.cluster.node.protocol.max.threads=80
nifi.cluster.node.connection.timeout=60 sec
nifi.cluster.node.read.timeout=60 sec
Our root canvas is quite big and has a lot of process groups (please check the attachment). Any suggestions what we can do to solve the issue? Do we have too many elements in one view, especially on the root view? Cheers
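PS: to follow the warning's suggestion, the timing details can be switched on in conf/logback.xml; a minimal sketch using the logger element style of NiFi's stock config:

<!-- enable the DEBUG timing output the slow-response warning refers to -->
<logger name="org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator" level="DEBUG"/>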
Labels:
- Apache NiFi
02-21-2018
01:57 PM
Yes, I've added some custom JARs. But my whole setup is created with Ansible and I've tested it with NiFi 1.4.0 and 1.5.0. The error above occurs only with NiFi 1.5.0 and only if SSL is enabled. But good point, I can skip the copy-JAR part and try it with an out-of-the-box NiFi installation. Will do that and give feedback. EDIT: you were right, my Splunk jar (for logging NiFi logs) caused the problem. I've removed it and now I don't see the error anymore.
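In case somebody hits the same NoSuchMethodError: the fix boiled down to finding the jar that bundles its own, older Apache HttpClient next to the one NiFi ships. A hedged sketch to locate such jars (the install path is an example):

# print every jar under the NiFi install that carries its own HttpClientBuilder
find /opt/nifi -name '*.jar' -exec sh -c \
  'unzip -l "$1" 2>/dev/null | grep -q "org/apache/http/impl/client/HttpClientBuilder" && echo "$1"' _ {} \;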
02-21-2018
07:55 AM
Hi, I've just secured my NiFi 1.5 setup with SSL. I'm able to access the NiFi canvas via https and I've enabled the "nifi.remote.input.secure=true" parameter as well. My problem is: as soon as I try to create a remote process group to any destination (doesn't matter if it exists or not), I instantly get the error message "java.lang.NoSuchMethodError: org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder". The error doesn't change if I switch between RAW and HTTP. I really have no idea how to troubleshoot...

2018-02-21 08:32:17,422 ERROR [Remote Process Group b746f775-0161-1000-0000-0000688d391a Thread-1] org.apache.nifi.engine.FlowEngine A flow controller task execution stopped abnormally
java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
at java.util.concurrent.FutureTask.report(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NoSuchMethodError: org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.setupClient(SiteToSiteRestApiClient.java:278)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.getHttpClient(SiteToSiteRestApiClient.java:219)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1189)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1237)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.fetchController(SiteToSiteRestApiClient.java:419)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:394)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:361)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:346)
at org.apache.nifi.remote.StandardRemoteProcessGroup.refreshFlowContents(StandardRemoteProcessGroup.java:842)
at org.apache.nifi.remote.StandardRemoteProcessGroup.lambda$initialize$0(StandardRemoteProcessGroup.java:193)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
... 3 common frames omitted
2018-02-21 08:32:17,425 ERROR [Remote Process Group b746f775-0161-1000-0000-0000688d391a Thread-1] org.apache.nifi.engine.FlowEngine A flow controller task execution stopped abnormally
java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
at java.util.concurrent.FutureTask.report(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NoSuchMethodError: org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.setupClient(SiteToSiteRestApiClient.java:278)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.getHttpClient(SiteToSiteRestApiClient.java:219)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1189)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1237)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.fetchController(SiteToSiteRestApiClient.java:419)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:394)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:361)
at org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:346)
at org.apache.nifi.remote.StandardRemoteProcessGroup$InitializationTask.run(StandardRemoteProcessGroup.java:1177)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
... 3 common frames omitted
2018-02-21 08:32:17,956 INFO [Flow Service Tasks Thread-2] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@4bf5c8b8 // Another save pending = false
Besides that, my cluster behaves normally and SSL looks like it works. Any help would be appreciated. Cheers
Labels:
- Apache NiFi
02-02-2018
04:37 PM
Hi, I've just upgraded my lab cluster to NiFi 1.5 and I'm playing around with SSL and LDAP. We have created self-signed certificates within our company and I've added the keys/certs to the corresponding truststore/keystore. The base for that was this topic: https://community.hortonworks.com/articles/17293/how-to-create-user-generated-keys-for-securing-nif.html However, the first time I try to access the NiFi web GUI via https, I get the message below.

2018-02-02 14:36:31,822 WARN [Replicate Request Thread-2] o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET /nifi-api/flow/current-user to nifi4-01.bblab.ch:8443 due to javax.ws.rs.ProcessingException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
2018-02-02 14:36:31,827 WARN [Replicate Request Thread-2] o.a.n.c.c.h.r.ThreadPoolRequestReplicator
javax.ws.rs.ProcessingException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:284)
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:278)
at org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:753)
at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
at org.glassfish.jersey.internal.Errors.process(Errors.java:229)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:414)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:752)
at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:661)
at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:875)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Unknown Source)
at sun.security.ssl.SSLSocketImpl.fatal(Unknown Source)
at sun.security.ssl.Handshaker.fatalSE(Unknown Source)
at sun.security.ssl.Handshaker.fatalSE(Unknown Source)
at sun.security.ssl.ClientHandshaker.serverCertificate(Unknown Source)
at sun.security.ssl.ClientHandshaker.processMessage(Unknown Source)
at sun.security.ssl.Handshaker.processLoop(Unknown Source)
at sun.security.ssl.Handshaker.process_record(Unknown Source)
at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at sun.net.www.protocol.https.HttpsClient.afterConnect(Unknown Source)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
at java.net.HttpURLConnection.getResponseCode(Unknown Source)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(Unknown Source)
at org.glassfish.jersey.client.internal.HttpUrlConnector._apply(HttpUrlConnector.java:390)
at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:282)
... 14 common frames omitted
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(Unknown Source)
at sun.security.validator.PKIXValidator.engineValidate(Unknown Source)
at sun.security.validator.Validator.validate(Unknown Source)
at sun.security.ssl.X509TrustManagerImpl.validate(Unknown Source)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(Unknown Source)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(Unknown Source)
... 30 common frames omitted
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(Unknown Source)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(Unknown Source)
at java.security.cert.CertPathBuilder.build(Unknown Source)
... 36 common frames omitted
Is this normal behavior since we use self-signed certs? As I said, it occurs only once after a fresh start of my cluster. If I try to access the web page again or do a refresh, I can access the web GUI and I can see the canvas. If I check my browser and the SSL certificate in the address field, I see a complete, successful cert chain without any error (of course I had to import the root CA cert into my browser first). openssl shows the public CA certs:

[root@nifi4-01 cluster]# openssl s_client -connect nifi4-01.bblab.ch:8443
CONNECTED(00000003)
depth=1 C = ch, O = Swisscom, OU = intern, CN = SwisscomCore
verify error:num=19:self signed certificate in certificate chain
---
Certificate chain
0 s:/C=CH/ST=Bern/L=Worblaufen/O=Swisscom (Schweiz) AG/OU=LI/CN=*.bblab.ch
i:/C=ch/O=Swisscom/OU=intern/CN=SwisscomCore
1 s:/C=ch/O=Swisscom/OU=intern/CN=SwisscomCore
i:/C=ch/O=Swisscom/OU=intern/CN=SwisscomCore
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIGHzCCBQegAwIBAgITewAElqQv+iz+xs5HkgABAASWpDANBgkqhkiG9w0BAQsF
ADBIMQswCQYDVQQGEwJjaDERMA8GA1UEChMIU3dpc3Njb20xDzANBgNVBAsTBmlu
dGVybjEVMBMGA1UEAxMMU3dpc3Njb21Db3JlMB4XDTE4MDIwMTEwNDA0MVoXDTIx
MDEzMTEwNDA0MVowczELMAkGA1UEBhMCQ0gxDTALBgNVBAgTBEJlcm4xEzARBgNV
BAcTCldvcmJsYXVmZW4xHjAcBgNVBAoTFVN3aXNzY29tIChTY2h3ZWl6KSBBRzEL
MAkGA1UECxMCTEkxEzARBgNVBAMMCiouYmJsYWIuY2gwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQCASdYU+Tx+6Z5IgKuaPk2LdLy34jYMoOwbnYI9Mgth
UzAc8eXXyxe82hM8yAd6svXL4K/t+Nn82y4HKEvkxCDTwrI0ZSE/TdLI0ddWyDyG
e8ErfaltSMmWoVPO93IwDVRZLz3KHlA5APWGzopvYkYNLL4s4Gm346t5X59efIZW
/cqFnR2e3jG00L722bvjIZrphq887BLAh8Ode/jmO+dpGgSgh6vLIqwFyUrRgL95
XF/uQYKH/lkaEq3JpMATYbeqX4ml2uACiHKQn4smnGZyxJ67XtEqVu4VMn3m5B8F
8E2c78uNuGnzE1DO28v5d0W4/MLm7OpzaTiW29mIs2uzAgMBAAGjggLVMIIC0TAd
BgNVHQ4EFgQUM+SzqbRHEw3xedTa+YoDkpdoEqswHwYDVR0jBBgwFoAUYEaL54+h
Y9HDkB8hymCAacZ7+70wggE3BgNVHR8EggEuMIIBKjCCASagggEioIIBHoaBs2xk
YXA6Ly8vQ049U3dpc3Njb21Db3JlLENOPVNTMDAyODQ1LENOPUNEUCxDTj1QdWJs
aWMlMjBLZXklMjBTZXJ2aWNlcyxDTj1TZXJ2aWNlcyxDTj1Db25maWd1cmF0aW9u
LERDPWl0cm9vdCxEQz1uZXQ/Y2VydGlmaWNhdGVSZXZvY2F0aW9uTGlzdD9iYXNl
P29iamVjdENsYXNzPWNSTERpc3RyaWJ1dGlvblBvaW50hjhodHRwOi8vU1MwMDI4
NDUuY29ycHJvb3QubmV0L0NlcnRFbnJvbGwvU3dpc3Njb21Db3JlLmNybIYsaHR0
cDovL2NybGNvcmUuc3dpc3Njb20uY29tL1N3aXNzY29tQ29yZS5jcmwwgb0GCCsG
AQUFBwEBBIGwMIGtMIGqBggrBgEFBQcwAoaBnWxkYXA6Ly8vQ049U3dpc3Njb21D
b3JlLENOPUFJQSxDTj1QdWJsaWMlMjBLZXklMjBTZXJ2aWNlcyxDTj1TZXJ2aWNl
cyxDTj1Db25maWd1cmF0aW9uLERDPWl0cm9vdCxEQz1uZXQ/Y0FDZXJ0aWZpY2F0
ZT9iYXNlP29iamVjdENsYXNzPWNlcnRpZmljYXRpb25BdXRob3JpdHkwDgYDVR0P
AQH/BAQDAgWgMDwGCSsGAQQBgjcVBwQvMC0GJSsGAQQBgjcVCIHf/32BsfJfgYEi
g7v2TYKu6GYPhZ62VYbK4nECAWQCAR0wHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsG
AQUFBwMCMCcGCSsGAQQBgjcVCgQaMBgwCgYIKwYBBQUHAwEwCgYIKwYBBQUHAwIw
DQYJKoZIhvcNAQELBQADggEBAHDmMNcko1+eRzqJS8IV95agKvhaXoXo9Xtb+81F
iDeELiPXg5CrRsY7i5rEdALHlN18ByuZ6wPLSk4LzuNR9qnv2DETJ3ImmiqEfKei
YiEzrOmh6A3nUEMC+ewZ/JoyKyVQCH5RMS0wuTUW4qPqGsvHEkKe5zsbW9KU+usq
3edaiDQY25/2h/J+b4t7JCMFV3lQDO6ipPcF2LzJ7qY+XdEH7RslfZty3vqM9njJ
Am7egRoUjHaMtaOV3gcOyK+XUpqPvR+WBrBu1NZKxJPqhwBeBC4AuvLNduudMsoq
mYMRdrGzkSg+XqIdYxf7awRZRY9m8GG3FbhqixG5E4p7xUk=
-----END CERTIFICATE-----
subject=/C=CH/ST=Bern/L=Worblaufen/O=Swisscom (Schweiz) AG/OU=LI/CN=*.bblab.ch
issuer=/C=ch/O=Swisscom/OU=intern/CN=SwisscomCore
---
Acceptable client certificate CA names
/DC=CH/DC=TAURI/CN=SwisscomDatacenterCore
/C=ch/O=Swisscom/OU=intern/CN=SwisscomCore
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
Requested Signature Algorithms: ECDSA+SHA512:RSA+SHA512:ECDSA+SHA384:RSA+SHA384:ECDSA+SHA256:RSA+SHA256:DSA+SHA256:ECDSA+SHA224:RSA+SHA224:DSA+SHA224:ECDSA+SHA1:RSA+SHA1:DSA+SHA1
Shared Requested Signature Algorithms: ECDSA+SHA512:RSA+SHA512:ECDSA+SHA384:RSA+SHA384:ECDSA+SHA256:RSA+SHA256:DSA+SHA256:ECDSA+SHA224:RSA+SHA224:DSA+SHA224:ECDSA+SHA1:RSA+SHA1:DSA+SHA1
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 3231 bytes and written 467 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-SHA256
Session-ID: 5A746D0F81D6C506ABC23A8FCE0D518521CCCA3EDC03C93B4B30447C83AD6DCC
Session-ID-ctx:
Master-Key: B6F3F4AC7C0626ECE3510AB233D2A01E642DD0B9235BDA46738C8D9BB1F104E5DDBFD2A9BD66032F544452F07E1226D5
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1517579535
Timeout : 300 (sec)
Verify return code: 19 (self signed certificate in certificate chain)
---
^Xclosed
I just tried it with certificates generated by the NiFi tls-toolkit: same behavior. I get this error once after a cluster restart. On NiFi 1.4 this wasn't the case.
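In case someone wants to reproduce the checks, a minimal sketch (alias, file names, and the store password are assumptions): import the internal root CA into the node's truststore, then let openssl verify the chain against it:

# import the root CA so the PKIX path can be built from the truststore
keytool -importcert -trustcacerts -alias swisscom-core-root \
  -file SwisscomCore-root.pem -keystore truststore.jks -storepass changeit

# with the root CA supplied, the "self signed certificate" verify error should disappear
openssl s_client -connect nifi4-01.bblab.ch:8443 -CAfile SwisscomCore-root.pem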
Labels:
- Apache NiFi