We have deployed Hortonworks 184.108.40.206 on 7 servers on our local network. During the initial configuration, Ambari by default listed the following two directories as the storage locations for the HDFS namenode/data directories:
Curiously, this default value immediately produced an error that /home paths should not be used for storing the HDFS directories. Accordingly, we removed the /home filepaths, so that the actually configured locations are only:
This configuration applied just fine, and we can verify in Ambari that these are the settings that actually went through.
Now any time we attempt to modify service configurations, the HDFS tab updates its recommendations to include those two /home filepaths that were previously removed, then throws an error that "/home paths should not be used" and refuses to continue until we remove those paths. However, as soon as we delete both of the /home filepaths, Ambari puts them right back in without any warning. It's like playing whack-a-mole with the UI.
Has anybody experienced this before? How do we stop Ambari's configuration screen from putting in the default values over and over?
Thanks for reporting this issue.
From the description, this looks like an Ambari bug. Can you please create an Apache Ambari Jira for this? Let us know on this thread when you create the bug. We will look into it and address it in the next Ambari release.
Also, you mentioned in your description that this issue was noticed on HDP-220.127.116.11, but the tag on this question is ambari-2.2.0.
However, ambari-2.2.0 does not support HDP-18.104.22.168. Can you please verify the version information for both HDP and Ambari?
For now, to work around this issue, you will need to edit a file on the ambari-server host at /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/stack_advisor.py.
Comment out the config items for now in this file at code link.
Ambari will then no longer recommend changes to these configs whenever you attempt to change any other configs on your cluster.
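To illustrate what that stack advisor logic is doing (and why commenting it out stops the /home paths from reappearing): the advisor recomputes the recommended data directories from each host's mount points every time any config changes. The sketch below is not the actual Ambari code — the function name, signature, and mount list are all illustrative — it just mimics the recommendation step so you can see where a filter or comment-out would go:

```python
# Illustrative sketch only -- NOT the actual Ambari stack_advisor.py code.
# It mimics the recommendation logic: build one data dir per mount point.
# Commenting out (or filtering) this step in the real file is what stops
# Ambari from clobbering hand-configured values.

def recommend_data_dirs(mount_points, subdir="hadoop/hdfs/data"):
    """Return one HDFS data dir per mount point, skipping /home mounts."""
    dirs = []
    for mount in mount_points:
        if mount == "/home" or mount.startswith("/home/"):
            continue  # Ambari rejects /home paths, so never recommend them
        dirs.append(mount.rstrip("/") + "/" + subdir)
    return dirs

print(recommend_data_dirs(["/", "/home", "/grid/0"]))
# ['/hadoop/hdfs/data', '/grid/0/hadoop/hdfs/data']
```

The point is only that the recommendation is regenerated on every config change, which is why deleting the paths in the UI never sticks on its own.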
Sorry, that must have been a mis-click. I just wanted to tag it with a generic "ambari" tag. We have Ambari 22.214.171.124, and I have confirmed Hadoop's version is 126.96.36.199. We cannot upgrade to 188.8.131.52 at this time due to an incompatibility with SAS.
We managed to solve this issue through our own investigation. I have not seen any similar reports of this error, so I'll record what happened in case somebody in the future gets stuck on something similar.
On this particular deployment, we initially deployed version 184.108.40.206, then had to downgrade to 220.127.116.11 due to a java incompatibility with the SAS/Access Interface to Hadoop. Since this was a fresh deployment, we handled the downgrade by simply wiping out the existing installation, using the HostCleanup.py script and ambari-server reset command, then redeploying. There were a few artifacts left over that caused warnings during configuration, but we were able to remove those and continue deploying.
Once it was fully deployed (and validated operational), we went to deploy the SAS Embedded Process, an extra service, and found that any attempt to change the server configuration was causing this glitch in the UI where we could not adjust the data directory away from /home/hadoop/hdfs/data, which is an invalid storage location.
We discovered that the original data directory, /hadoop/hdfs/data, had not been wiped out after the initial deployment and still carried the previous deployment's identifiers on all of its folders. To reset this directory, we moved each data directory to /hadoop/hdfs/data.old, then rebooted the data nodes to get clean folders with the correct names that the name nodes expected.
For whatever reason, this mismatch in the folder name was causing the configuration UI to freak out and replace our changes with the default settings. Getting the correct data folders in place fixed our issue with the UI.