Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.
Announcements
This board is archived and read-only for historical reference. To ask a new question, please post a new topic on the appropriate active board.

Hi, I deleted the ZKFailOverController service from Ambari on one of my two NameNodes. Please help me bring the service back.

 
1 ACCEPTED SOLUTION

Master Mentor

priyanshu bindal

Can you try adding the ZKFC on that host using the Ambari APIs?

Example:

curl --user admin:admin -i -X POST http://erie1.example.com:8080/api/v1/clusters/ErieCluster/hosts/erie2.example.com/host_components/ZK...

- Here:

erie1.example.com = the Ambari server hostname

ErieCluster = the cluster name

erie2.example.com = the host on which we want to install the ZKFC

- You can find more details on how to add a host component using the Ambari APIs in the following link: https://cwiki.apache.org/confluence/display/AMBARI/Add+a+host+and+deploy+components+using+APIs

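As a sketch, the full sequence behind that example is three REST calls: register the component on the host, install it, then start it. The script below only prints the curl commands (a dry run) so it can be reviewed before anything is executed. The hostnames, cluster name, and credentials are the placeholder values from the example above, and ZKFC is assumed to be the component name behind the truncated URL.

```shell
#!/bin/sh
# Sketch only: builds the Ambari REST calls to add, install, and start ZKFC.
# AMBARI_HOST, CLUSTER, and TARGET_HOST are the placeholder names from the
# example above; ZKFC is assumed to be the component name in the truncated URL.
AMBARI_HOST=${AMBARI_HOST:-erie1.example.com}
CLUSTER=${CLUSTER:-ErieCluster}
TARGET_HOST=${TARGET_HOST:-erie2.example.com}
URL="http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER}/hosts/${TARGET_HOST}/host_components/ZKFC"

run() { echo "$@"; }   # dry run: print each curl command instead of executing it

# 1. Register the component on the host
run curl -u admin:admin -H 'X-Requested-By: ambari' -X POST "$URL"
# 2. Install it
run curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"HostRoles":{"state":"INSTALLED"}}' "$URL"
# 3. Start it
run curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"HostRoles":{"state":"STARTED"}}' "$URL"
```

Replacing the `run` function body with `"$@"` would execute the calls for real once the placeholders are adjusted for your cluster.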


4 REPLIES



Thanks Jay,

The following command helped me:

curl -u admin:admin -H "X-Requested-By: ambari" -i -X POST http://erie1.example.com:8080/api/v1/clusters/ErieCluster/hosts/erie2.example.com/host_components/ZK...

But after installing it, it keeps stopping, and I am not able to see any logs under /var/log/hadoop/hdfs either. Could you please guide me on what the possible reason could be? On the other NameNode, ZKFailOverController is working fine.

Master Mentor

Can you please try running "ps -ef | grep zkfc" as follows:

# ps -ef | grep zkfc
hdfs     25692     1  0 09:23 ?        00:00:20 /usr/jdk64/jdk1.8.0_60/bin/java -Dproc_zkfc -Xmx1024m -Dhdp.version=2.5.0.0-1133 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.5.0.0-1133/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.5.0.0-1133/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1133/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.5.0.0-1133 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-zkfc-erie1.example.com.log -Dhadoop.home.dir=/usr/hdp/2.5.0.0-1133/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.5.0.0-1133/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1133/hadoop/lib/native:/usr/hdp/2.5.0.0-1133/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1133/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.tools.DFSZKFailoverController

- And in the output, can you please check which log file path it is using, especially for the following two properties:

-Dhadoop.log.dir=/var/log/hadoop/hdfs 
-Dhadoop.log.file=hadoop-hdfs-zkfc-erie1.example.com.log

Are they the same?
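As a quick sanity check, those two -D properties can be pulled straight out of the ps output with grep and cut. This is only a sketch: CMDLINE below is a shortened sample of the command line shown above, and in practice you would feed in the real ps output instead.

```shell
#!/bin/sh
# Extract hadoop.log.dir and hadoop.log.file from a zkfc process command line.
# CMDLINE is a shortened sample of the ps output above; in practice use:
#   CMDLINE=$(ps -ef | grep [z]kfc)
CMDLINE='java -Dproc_zkfc -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-zkfc-erie1.example.com.log org.apache.hadoop.hdfs.tools.DFSZKFailoverController'

# Each property can appear more than once on the real command line
# (console logger first, RFA logger second), so take the last occurrence.
LOG_DIR=$(printf '%s\n' "$CMDLINE" | grep -o 'hadoop\.log\.dir=[^ ]*' | tail -1 | cut -d= -f2)
LOG_FILE=$(printf '%s\n' "$CMDLINE" | grep -o 'hadoop\.log\.file=[^ ]*' | tail -1 | cut -d= -f2)

echo "ZKFC should be logging to: ${LOG_DIR}/${LOG_FILE}"
```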


Yes, it is the same as you mentioned.

I switched to the hdfs user using:

su hdfs

Then I ran jps; surprisingly, DFSZKFailoverController was running, so I killed the process using kill -9 <pid>.

I then tried starting it again from Ambari, but it stops again, and again nothing is being logged.
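One hedged suggestion when a Hadoop daemon dies without writing to its .log file: errors thrown before log4j initializes usually land in the daemon's .out file instead. A sketch for checking that (the log directory path is the one from this thread; LOG_DIR can be overridden):

```shell
#!/bin/sh
# When a daemon exits before logging starts, its stderr usually goes to the
# hadoop-hdfs-zkfc-*.out file rather than the .log file. Look there first.
# LOG_DIR defaults to the path discussed in this thread.
LOG_DIR=${LOG_DIR:-/var/log/hadoop/hdfs}

# Newest files in the log directory first:
ls -t "$LOG_DIR" 2>/dev/null | head -5

# Show the tail of the most recent zkfc .out file, if one exists:
newest_out=$(ls -t "$LOG_DIR"/*zkfc*.out 2>/dev/null | head -1)
if [ -n "$newest_out" ]; then
    tail -n 50 "$newest_out"
fi

# Running the controller in the foreground as the hdfs user
# (sudo -u hdfs hdfs zkfc) also prints the failure directly to the terminal.
```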