I think I see what is going on.
Your host names are hadoop.pre01, hadoop.pre02, etc. Those are not fully qualified domain names.
I believe a few places in the code assume hostnames look like hostname1.domain.com, hostname2.domain.com.
It assumes the part of the hostname before the first dot is unique, but in this case it is not: every node's short name is just "hadoop".
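A small illustration of the collision (hostnames taken from this thread): stripping everything after the first dot, the way a short-hostname lookup does, yields the same name on every node.

```shell
#!/bin/sh
# Sketch: ${fqdn%%.*} keeps only the text before the first dot,
# the same truncation that produces a "short" hostname.
# With names like hadoop.pre01 / hadoop.pre02, every node collapses
# to the short name "hadoop", so they are no longer distinguishable.
for fqdn in hadoop.pre01 hadoop.pre02; do
  echo "$fqdn -> short name: ${fqdn%%.*}"
done
```

Running it prints `hadoop` as the short name for both hosts, which is why code keyed on the first hostname component misbehaves here.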
Thank you for your reply. I will change the hostnames soon and reply again after that.
@yuannit Let me know more about the error and share your error log.
How are you configuring it?
Are you managing your cluster with Cloudera Manager, or from the command line?
Did you install all the prerequisite daemons for HA HDFS?
Thank you very much for your reply.
Yes, my cluster is managed by Cloudera Manager.
I installed it from a local yum repo built from a series of cloudera-manager RPM files that I downloaded.
The cluster is otherwise fine; only HDFS HA fails to install through Cloudera Manager. I tried upgrading CM from 5.11.1 to 5.12 but still hit the same problem, so I don't think it is incidental.
Following the official tutorial, I installed the HDFS service correctly and then clicked Enable High Availability, but the JournalNode edits dir step shows only a single input box, and I found no abnormal information in the log files.
I fixed the problem.
I manually edited the JournalNode Edits Directory field, entered the value '/mnt/disk1/dfs/jnn', and was then able to enable HA without any issue.
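For reference, outside of Cloudera Manager that field corresponds to the `dfs.journalnode.edits.dir` property in hdfs-site.xml. CM generates this for you, so this is only an illustration; the path below is the example value from this thread, not a recommendation.

```xml
<!-- hdfs-site.xml: directory where each JournalNode stores its edit logs.
     Cloudera Manager fills this in from the "JournalNode Edits Directory"
     field in the Enable High Availability wizard. -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/mnt/disk1/dfs/jnn</value>
</property>
```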
Your hosts file should look something like this:

/etc/hosts:
192.168.200.11 master

/etc/sysconfig/network:
HOSTNAME=master
NETWORKING=yes
Do you still have a SecondaryNameNode hanging around in your HA cluster? You mentioned one in your previous post.