Member since: 05-30-2019
Posts: 86
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1666 | 11-21-2019 10:59 AM
03-17-2020
11:57 AM
Hi,
I have a Hadoop environment with 2 NiFi nodes running. I have not been able to connect to the NiFi UI since yesterday, and I tried to restart one of the nodes without any success.
In nifi-app.log, this is the only ERROR-level entry I see:
2020-03-17 13:42:37,974 ERROR [main] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for ***-nf02.****.ca:9091 -- Node disconnected from cluster due to org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow.
Could you please help?
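A minimal recovery sketch, assuming the flow currently on nf01 is the one you want to keep and a default conf-directory layout (paths and hostnames below are placeholders, not your actual values):
# On the node that refuses to join (nf02): stop NiFi, back up its local flow,
# then copy the flow from the node holding the desired flow (nf01).
cd /var/lib/nifi/conf                      # or wherever nifi.properties points flow.xml.gz
cp flow.xml.gz flow.xml.gz.bak.$(date +%F) # keep a backup of the diverged flow
scp nf01.example.ca:/var/lib/nifi/conf/flow.xml.gz .
# Restart NiFi on nf02; with matching flows it should be able to rejoin the cluster.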
Labels:
- Apache NiFi
12-30-2019
02:55 PM
Hi, I have a Hadoop 3.0.1 cluster with 3 JournalNodes, 1 NFS gateway node and 6 worker nodes. I connected to the worker nodes over SSH today and noticed with "df -h" that one local disk (/data/4) is around 94% used on every worker node, whereas the other disks are between 50% and 65% used.
The HDFS status, on the other hand, is the following:
Disk Usage (DFS Used): 44.77% (28.1 TB / 62.8 TB)
Disk Usage (Non DFS Used): 14.97% (9.4 TB / 62.8 TB)
Disk Remaining: 40.26% (25.3 TB / 62.8 TB)
What elements should I check to make sure that a full local disk won't create any issues?
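A hedged sketch of what could be checked, assuming the Hadoop 3.x intra-DataNode disk balancer is available on this cluster (the hostname is a placeholder):
# Compare per-disk usage of the DataNode volumes on a worker.
df -h /data/*
# Check whether non-HDFS data (YARN local dirs, logs) is what is filling /data/4.
du -sh /data/4/* | sort -h | tail
# Hadoop 3 ships an intra-DataNode disk balancer that evens out volumes;
# make sure dfs.disk.balancer.enabled is true in hdfs-site.xml, then:
hdfs diskbalancer -plan worker01.example.com   # generates a plan; note the printed plan path
hdfs diskbalancer -execute /path/to/worker01.example.com.plan.json
hdfs diskbalancer -query worker01.example.com  # check progress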
Labels:
11-21-2019
10:59 AM
Hi @Shelton, when you said authorizations.xml, are you talking about authorizers.xml? The Hadoop environment uses Ranger for security and is also connected to an LDAP server for users and groups. I don't see any users.xml in the conf directory.
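A small check that may help here, sketched with an assumed HDF-style conf path (adjust to your installation):
# See which authorizer NiFi is actually configured to use.
grep "nifi.security.user.authorizer" /usr/hdf/current/nifi/conf/nifi.properties
# authorizers.xml defines the available authorizers; users.xml and
# authorizations.xml are only generated when a file-based authorizer is in use,
# so with Ranger managing policies it is normal not to find them in conf.
ls -l /usr/hdf/current/nifi/conf/authorizers.xml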
11-21-2019
09:08 AM
Hi, we are currently using HDP 3.0 with Ambari and we have installed 2 NiFi nodes. We made some config changes on NiFi node01 without restarting both nodes (I only restarted node01, not node02). The changes were not working properly, so we decided to roll back to the previous configs, but whenever I try to start node01 I get the following error: "Failed to connect node to cluster because local flow is different than cluster flow." My guess would be that the two nodes are out of sync. How can we fix this issue? Thank you for your help.
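One common way out, sketched under the assumption that the flow currently running on node02 is the one you want to keep (the path below is illustrative):
# On node01: stop NiFi, move the conflicting local flow aside, then restart.
# With no local flow present, node01 inherits the cluster flow from node02 on startup.
mv /var/lib/nifi/conf/flow.xml.gz /var/lib/nifi/conf/flow.xml.gz.old
# Then restart NiFi on node01 from Ambari.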
Labels:
09-03-2019
01:28 PM
Hi @nshawa, I am getting the following error on the PutHiveStreaming processor after running the template you provided. Any idea how to fix this?
08-19-2019
12:28 PM
@jsensharma I have realized that this host has a different default Java than the Ambari host.
host:
shell> java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
ambari host:
shell> java -version
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
Could it be the root cause of this issue?
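A quick way to compare what the host and the services are actually running with, sketched with an assumed path (Ambari-distributed JDKs often live under /usr/jdk64, but that may differ here):
# Which java binary is first on the PATH, and where does it really point?
readlink -f "$(which java)"
# The JDK that Ambari distributed to managed hosts (path is an assumption).
ls -d /usr/jdk64/jdk* 2>/dev/null
# The JVM a running YARN process is actually using.
ps -C java -o pid,cmd | grep -i nodemanager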
08-19-2019
11:07 AM
Hi @jsensharma Please find below the complete error trace including Caused By section:
2019-08-19 13:37:49,643 INFO localizer.ResourceLocalizationService (ResourceLocalizationService.java:run(997)) - Public cache exiting
2019-08-19 13:37:49,643 WARN nodemanager.NodeResourceMonitorImpl (NodeResourceMonitorImpl.java:run(167)) - org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl is interrupted. Exiting.
2019-08-19 13:37:49,645 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping NodeManager metrics system...
2019-08-19 13:37:49,646 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2019-08-19 13:37:49,647 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - NodeManager metrics system stopped.
2019-08-19 13:37:49,647 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - NodeManager metrics system shutdown complete.
2019-08-19 13:37:49,647 ERROR nodemanager.NodeManager (NodeManager.java:initAndStartNodeManager(936)) - Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: The per-node collector webapp failed to start.
    at org.apache.hadoop.yarn.server.timelineservice.collector.NodeTimelineCollectorManager.startWebApp(NodeTimelineCollectorManager.java:315)
    at org.apache.hadoop.yarn.server.timelineservice.collector.NodeTimelineCollectorManager.serviceStart(NodeTimelineCollectorManager.java:132)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
    at org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService.serviceStart(PerNodeTimelineCollectorsAuxService.java:101)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceStart(AuxServices.java:313)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceStart(ContainerManagerImpl.java:643)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:934)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:1013)
Caused by: java.io.IOException: Problem starting http server
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1165)
    at org.apache.hadoop.yarn.server.timelineservice.collector.NodeTimelineCollectorManager.startWebApp(NodeTimelineCollectorManager.java:311)
    ... 13 more
Caused by: java.security.UnrecoverableKeyException: Get Key failed: Given final block not properly padded
    at sun.security.pkcs12.PKCS12KeyStore.engineGetKey(PKCS12KeyStore.java:410)
    at sun.security.provider.KeyStoreDelegator.engineGetKey(KeyStoreDelegator.java:96)
    at sun.security.provider.JavaKeyStore$DualFormatJKS.engineGetKey(JavaKeyStore.java:70)
    at java.security.KeyStore.getKey(KeyStore.java:1023)
    at sun.security.ssl.SunX509KeyManagerImpl.<init>(SunX509KeyManagerImpl.java:133)
    at sun.security.ssl.KeyManagerFactoryImpl$SunX509.engineInit(KeyManagerFactoryImpl.java:70)
    at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256)
    at org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1087)
    at org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:301)
    at org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:221)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
    at org.eclipse.jetty.server.SslConnectionFactory.doStart(SslConnectionFactory.java:72)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
    at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:268)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.server.Server.doStart(Server.java:401)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1134)
    ... 14 more
Caused by: javax.crypto.BadPaddingException: Given final block not properly padded
    at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:989)
    at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:845)
    at com.sun.crypto.provider.PKCS12PBECipherCore.implDoFinal(PKCS12PBECipherCore.java:399)
    at com.sun.crypto.provider.PKCS12PBECipherCore$PBEWithSHA1AndDESede.engineDoFinal(PKCS12PBECipherCore.java:431)
    at javax.crypto.Cipher.doFinal(Cipher.java:2165)
    at sun.security.pkcs12.PKCS12KeyStore.engineGetKey(PKCS12KeyStore.java:348)
    ... 37 more
2019-08-19 13:37:49,651 INFO nodemanager.NodeManager (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at hd-prd-ll01.hadoop.com/**.*.**.**
************************************************************/
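The deepest Caused by points at the keystore rather than the timeline collector itself: "Given final block not properly padded" from PKCS12KeyStore typically means the keystore or key password Jetty was given does not match the keystore. A hedged way to verify it, with placeholder paths and passwords:
# Verify the keystore opens with the password configured for the NodeManager's
# HTTPS webapp (ssl-server.xml: ssl.server.keystore.* properties). Path and
# password below are placeholders.
keytool -list -v -keystore /etc/security/serverKeys/keystore.jks -storepass 'the-configured-password'
# If the store opens but key retrieval still fails, check that the key password
# (ssl.server.keystore.keypassword) also matches the one the key was created with.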
08-15-2019
08:33 PM
Hi, I am getting the following message in Ambari on every worker node: "1/1 local-dirs usable space is below configured utilization percentage/no more usable space [ /data/2/hadoop/yarn/local : used space above threshold of 90.0% ]". I have also checked the ResourceManager UI > Nodes: 6 nodes are currently in an unhealthy state. Could you please help me get them back to a healthy state? Thank you.
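A sketch of the usual two-pronged check, assuming standard HDP paths (adjust to your yarn.nodemanager.local-dirs setting):
# 1) See how full the flagged disk is and what is consuming it.
df -h /data/2
du -sh /data/2/hadoop/yarn/local/usercache/* 2>/dev/null | sort -h | tail
# 2) The NodeManager marks a local dir bad above this threshold (default 90.0%):
#    yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
#    Either free space below that threshold or raise the limit in yarn-site.xml.
# After cleanup and a NodeManager restart, confirm the nodes report healthy again:
yarn node -list -all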
Labels:
- Apache Ambari
- Apache YARN
08-12-2019
07:45 PM
Hi
I am trying to restart the NodeManager that was previously decommissioned on a node, but it stops soon after it starts...
ERROR collector.NodeTimelineCollectorManager (NodeTimelineCollectorManager.java:startWebApp(314)) - The per-node collector webapp failed to start.
java.io.IOException: Problem starting http server
Any idea what the problem could be?
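A small sketch of how I would dig out the real cause (the log path is the usual HDP default and may differ on this cluster):
# "Problem starting http server" is only the outer exception; the nested
# "Caused by" lines in the NodeManager log usually carry the real reason
# (port already in use, SSL keystore problem, bad local dirs, ...).
grep -A40 "Error starting NodeManager" /var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-*.log | less
# Since this node was decommissioned before, also make sure it is no longer
# listed in the YARN exclude file (yarn.resourcemanager.nodes.exclude-path),
# then refresh the node list on the ResourceManager:
yarn rmadmin -refreshNodes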
Labels:
- Apache Hadoop
06-27-2019
12:51 AM
Hi, I have exported a table from a Hadoop environment using the following command:
export table department to 'hdfs_exports_location/department';
I tried to import the same table into another Hadoop environment using the command:
import from 'hdfs_exports_location/department';
and I get the following error:
Error: Error while compiling statement: FAILED: SemanticException [Error 10027]: Invalid path (state=42000,code=10027)
I also tried:
import table imported_dept from 'hdfs_exports_location/department';
and I get the following error:
Error: Error while compiling statement: FAILED: SemanticException [Error 10324]: Import Semantic Analyzer Error (state=42000,code=10324)
Any idea what the issue could be? I am using Hive 3.1.0.3.0.1.0-187. Thank you.
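A hedged sketch of the same flow with the path spelled out: "Invalid path" on IMPORT often comes from the export directory not existing (or not being readable) on the target cluster, so using an absolute HDFS path and copying the export over first is worth trying. JDBC URLs, NameNode hosts, and paths below are illustrative:
# On the source cluster: export to an absolute HDFS path.
beeline -u "$SRC_JDBC_URL" -e "EXPORT TABLE department TO '/tmp/hive_exports/department';"
# Copy the export directory to the target cluster if the two do not share HDFS.
hadoop distcp hdfs://source-nn:8020/tmp/hive_exports/department hdfs://target-nn:8020/tmp/hive_exports/department
# On the target cluster: import from the absolute path that now exists there.
beeline -u "$TGT_JDBC_URL" -e "IMPORT TABLE imported_dept FROM '/tmp/hive_exports/department';"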
Labels:
- Apache Hive