Created on 03-17-2020 06:41 AM - last edited on 03-17-2020 07:30 AM by cjervis
While trying to start ZKFC from Ambari, it fails with the below error:
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s
/bin/bash -c 'ulimit -c unlimited ; /usr/hdp/3.1.0.0-78/hadoop/bin/hdfs --config /usr/hdp/3.1.0.0-78/hadoop/conf
--daemon start zkfc'' returned 1. ERROR: Cannot set priority of zkfc process 2866
Does anybody know a solution?
Thanks in advance.
Created 03-18-2020 02:21 AM
@ManjunathK Can you check the jars under the Hadoop classpath directory /usr/hdp/3.1.0.0-78/hadoop-hdfs/ on the problematic node and see if any of them are 0-byte files? If so, copy all the jars over from a working node and restart the services.
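For example, a quick way to spot truncated jars (the find options are standard; good-node is a placeholder for a healthy host in your cluster):

find /usr/hdp/3.1.0.0-78/hadoop-hdfs/ -type f -name '*.jar' -size 0
# if anything is listed, pull fresh copies from a healthy node, e.g.:
scp 'good-node:/usr/hdp/3.1.0.0-78/hadoop-hdfs/*.jar' /usr/hdp/3.1.0.0-78/hadoop-hdfs/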
Created 04-19-2022 09:39 AM
I recently faced this same issue in one of our environments.
Error:
returned 1. ERROR: Cannot set priority of zkfc process 24167
When I dug deeper into the ZKFC log, I found this error:
ERROR org.apache.hadoop.ha.ZKFailoverController: Unable to start failover controller. Parent znode does not exist.
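For anyone hunting for the same message, the ZKFC log on an HDP install typically lives under /var/log/hadoop/hdfs/ (the exact path and file name depend on your log settings, so treat this as a sketch):

grep -i 'Unable to start failover controller' /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-*.log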
I applied the fix of formatting the ZKFC state in ZooKeeper; once that was done, I restarted the services and everything worked fine.
bin/hdfs zkfc -formatZK
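Note that -formatZK will prompt before overwriting an existing parent znode, so run it as the hdfs user while the ZKFCs are stopped. Afterwards you can confirm the znode exists with the ZooKeeper CLI; the client path and quorum host below are placeholders for a typical HDP setup:

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk-host:2181 ls /hadoop-ha
# the HA nameservice ID should now be listed, e.g. [mycluster]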