We had a 5-node cluster (2 master and 3 slave nodes) and recently added 4 more slave nodes, all through Ambari. One strange thing we noticed: on all 5 existing nodes, /etc/hadoop/conf/slaves and /etc/hbase/conf/regionservers did not get entries for the newly added nodes (they still list only the original 3 slaves), but the newly added nodes have entries for all 7 slave hosts. Why is that?
I believe this will cause an issue when we restart, or start/stop, the HBase service.
However, the Ambari dashboard shows all 7 slaves, and we are able to run jobs on all 7 slaves too.
As @Enis clarified, there will be no impact on your start/stop: Ambari manages each component (DataNode, RegionServer, etc.) directly on every host through its agents rather than reading those files. If you have custom scripts that rely on them, you may want to fix those files so they list all the slaves. I have no explanation yet for why the files on the existing slave nodes did not get updated, but you are safe for start/stop.
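If you do want those files consistent for your own scripts, one way to spot stale copies is to diff each node's slaves file against the full host list the Ambari dashboard shows. A minimal sketch, using made-up host names and temp-file paths to stand in for the real /etc/hadoop/conf/slaves:

```shell
#!/usr/bin/env bash
# Canonical list of all 7 slave hosts (hypothetical names,
# matching what the Ambari dashboard reports)
cat > /tmp/slaves.expected <<'EOF'
slave1
slave2
slave3
slave4
slave5
slave6
slave7
EOF

# Simulate a stale slaves file on one of the original nodes
# (in reality this would be /etc/hadoop/conf/slaves on that host)
cat > /tmp/slaves.actual <<'EOF'
slave1
slave2
slave3
EOF

# diff exits non-zero when the files differ, flagging a stale node;
# comm -13 then prints the hosts missing from the stale copy
if ! diff -q /tmp/slaves.expected /tmp/slaves.actual >/dev/null; then
  echo "slaves file is stale; missing hosts:"
  comm -13 <(sort /tmp/slaves.actual) <(sort /tmp/slaves.expected)
fi
```

You would run the same comparison against /etc/hbase/conf/regionservers, and overwrite the stale file with the canonical list if your scripts depend on it.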