Member since: 04-07-2017
Posts: 6
Kudos Received: 0
Solutions: 0
12-06-2021
05:14 PM
Hello Experts. We have an HDP 2.5.0 cluster and recently migrated the JDK from Oracle to OpenJDK. Now, after rebooting the servers, Ambari does not start. The error in the log file is:

02 Dec 2021 20:16:04,161 INFO [main] CertificateManager:76 - Certificate exists:true
02 Dec 2021 20:16:04,161 INFO [main] KerberosChecker:128 - Skipping Ambari Server Kerberos credentials check.
02 Dec 2021 20:16:04,164 ERROR [main] AmbariServer:1017 - Failed to run the Ambari Server
java.lang.NoClassDefFoundError: Could not initialize class javax.crypto.JceSecurity
        at javax.crypto.SecretKeyFactory.nextSpi(SecretKeyFactory.java:295)
        at javax.crypto.SecretKeyFactory.<init>(SecretKeyFactory.java:121)
        at javax.crypto.SecretKeyFactory.getInstance(SecretKeyFactory.java:160)

We suspect we missed running "ambari-server setup" to reconfigure the new JDK. Since we have never run it before, we would like to know whether there is any risk of losing data after the setup. Any tips or advice? Regards
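For reference, pointing Ambari Server at a new JDK is typically done by re-running setup with the Java home path. A minimal sketch of the steps, assuming an OpenJDK install path (the path below is an assumption; adjust to your installation, and back up the Ambari database first as a precaution):

```shell
# Sketch: re-register the JDK with Ambari Server (the JDK path is an assumption).
# "ambari-server setup -j <JAVA_HOME>" updates java.home in
# /etc/ambari-server/conf/ambari.properties; it does not modify the
# Ambari database, so cluster metadata is left untouched by this step.
ambari-server stop
ambari-server setup -j /usr/lib/jvm/java-1.8.0-openjdk   # assumed OpenJDK path
ambari-server start
```

The NoClassDefFoundError on javax.crypto.JceSecurity is consistent with Ambari still resolving crypto classes against the old Oracle JDK layout, which re-running setup with the correct Java home addresses.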
Labels:
Hortonworks Data Platform (HDP)
10-01-2020
04:24 PM
Hi steven. Thanks for the quick response. I'm running this HDP cluster on SUSE 12 SP2. This node has 32 GB RAM and is using just 4; free RAM is 27 GB. The YARN configuration is:

ResourceManager Java heap size = 2048
NodeManager Java heap size = 1024
AppTimelineServer Java heap size = 8072

ulimit for the RM process:

core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 128615
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

From the RM log file:

2020-09-29 17:15:00,825 INFO scheduler.AbstractYarnScheduler (AbstractYarnScheduler.java:getMinimumAllocation(1367)) - Minimum allocation = <memory:1024, vCores:1>
2020-09-29 17:15:00,825 INFO scheduler.AbstractYarnScheduler (AbstractYarnScheduler.java:getMaximumAllocation(1379)) - Maximum allocation = <memory:24576, vCores:3>

No matter how much memory is assigned to the RM, it always fails with this Java OOM. What would be a recommended Java memory configuration for the YARN components?
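Since the failure reported in the other post is "unable to create new native thread" rather than heap exhaustion, the per-user process/thread cap is the usual suspect, and raising it is done with a limits fragment rather than JVM options. A sketch, assuming the daemons run as the yarn user (user name and values are assumptions; pick values appropriate to the node):

```
# /etc/security/limits.d/yarn.conf  (sketch; user name and values are assumptions)
yarn  soft  nproc   65536
yarn  hard  nproc   65536
yarn  soft  nofile  65536
yarn  hard  nofile  65536
```

Note that an interactive shell's ulimit output may not reflect the limits applied to the daemon's service account, so it is worth verifying the limits of the actual RM process via /proc/<pid>/limits.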
09-28-2020
03:26 PM
Hello. I'm running HDP 3.1.0 and the YARN ResourceManager does not start; it always crashes with the error below, no matter how much heap memory is specified:

2020-09-28 15:12:57,738 INFO ipc.Server (Server.java:run(1074)) - Starting Socket Reader #1 for port 8030
2020-09-28 15:12:57,747 INFO pb.RpcServerFactoryPBImpl (RpcServerFactoryPBImpl.java:createServer(173)) - Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server
2020-09-28 15:12:57,748 FATAL resourcemanager.ResourceManager (ResourceManager.java:main(1516)) - Error starting ResourceManager
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:714)
        at org.apache.hadoop.ipc.Server.start(Server.java:3070)
        at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.serviceStart(ApplicationMasterService.java:210)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
        at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:869)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)

Please, any tip will be appreciated. Thanks
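"java.lang.OutOfMemoryError: unable to create new native thread" usually means the OS refused to create a thread (a process-count or memory cap was hit), not that the JVM heap is full, so raising the heap will not help. A quick read-only diagnostic sketch, to be run as the user that starts the ResourceManager:

```shell
# Inspect the limits that typically cause "unable to create new native thread".
echo "max user processes (ulimit -u): $(ulimit -u)"
echo "open files (ulimit -n): $(ulimit -n)"
# Kernel-wide ceilings on threads and PIDs:
echo "kernel.threads-max: $(cat /proc/sys/kernel/threads-max)"
echo "kernel.pid_max: $(cat /proc/sys/kernel/pid_max)"
# Rough count of threads currently running system-wide (includes ps header line):
echo "current thread count: $(ps -eLf | wc -l)"
```

If the current thread count is close to ulimit -u for the service account, the fix is raising the nproc limit for that user rather than tuning JVM heap sizes.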
Labels:
Hortonworks Data Platform (HDP)