Ambari UI overwriting changes to hdfs config

New Contributor

Background:

We have deployed Hortonworks 2.6.2.14 on 7 servers on our local network. During the initial configuration, Ambari by default had the following two directories listed as the storage locations for hdfs namenode/data directories:

/hadoop/hdfs/namenode, /home/hadoop/hdfs/namenode

/hadoop/hdfs/data, /home/hadoop/hdfs/data

Curiously, these default values immediately produced an error that /home paths should not be used for storing the HDFS directories. Obligingly, we removed the /home filepaths, so that the actually configured locations are only:

/hadoop/hdfs/namenode

/hadoop/hdfs/data

This configured just fine, and we can verify in Ambari that these are the settings that actually went through.

The Issue:

Now any time we attempt to modify service configurations, the HDFS tab updates its recommendations to include those two /home filepaths that were previously removed, then throws an error that "/home paths should not be used" and refuses to continue until we remove those paths. However, as soon as we delete both of the /home filepaths, Ambari puts them right back in without any warning. It's like playing whack-a-mole with the UI.

Has anybody experienced this before? How do we stop Ambari's configuration screen from putting in the default values over and over?

3 REPLIES

Re: Ambari UI overwriting changes to hdfs config

Expert Contributor
@Joshua Connelly

Thanks for reporting this issue.

From the description, this looks like an Ambari bug. Can you please create an Apache Ambari Jira for it and let us know on this thread once you have filed it? We will look into it and address it in the next Ambari release.

Also, you mentioned in your description that this issue was noticed on HDP 2.6.2.14, but the question is tagged ambari-2.2.0.

Ambari 2.2.0 does not support HDP 2.6.2.14, so can you please verify the version information for both HDP and Ambari?

For now, to work around this issue, you will need to edit a file on the ambari-server host at /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/stack_advisor.py.

Comment out the config items for these directories in that file (code link); see the sketch below for the kind of edit.

Ambari will then not recommend any changes to these configs whenever you attempt to change any other configs on your cluster.
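
To illustrate, here is a rough sketch only of what that edit might look like. The exact lines differ between Ambari versions, and the method and helper names shown (recommendHDFSConfigurations, updateMountProperties) are assumptions about what your copy of stack_advisor.py contains; locate the block that recommends dfs.namenode.name.dir and dfs.datanode.data.dir and comment it out there.

    # Rough sketch of the edit in stack_advisor.py -- not exact code; the
    # method and helper names are assumptions about your Ambari version.
    # This is a method inside the stack advisor class in that file.
    def recommendHDFSConfigurations(self, configurations, clusterData, services, hosts):
        # ... leave the other HDFS recommendations as they are ...

        # Commented out so Ambari stops re-inserting the default
        # namenode/datanode directory recommendations:
        # hdfs_mount_properties = [
        #     ("dfs.namenode.name.dir", "NAMENODE", "/hadoop/hdfs/namenode", "m"),
        #     ("dfs.datanode.data.dir", "DATANODE", "/hadoop/hdfs/data", "m"),
        # ]
        # self.updateMountProperties("hdfs-site", hdfs_mount_properties,
        #                            configurations, services, hosts)
        pass

After saving the change, you will likely need to restart the Ambari server (ambari-server restart) so the modified stack advisor is picked up.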

Re: Ambari UI overwriting changes to hdfs config

New Contributor

Sorry, that must have been a mis-click; I just wanted to tag it with a generic "ambari" tag. We have Ambari 2.6.0.0, and I have confirmed Hadoop's version is 2.6.2.14. We cannot upgrade to 2.6.3.0 at this time due to an incompatibility with SAS.

Re: Ambari UI overwriting changes to hdfs config

New Contributor

We managed to solve this issue through our own investigation. I have not seen any similar reports of this error, so I'll record what happened in case somebody in the future gets stuck on something similar.

On this particular deployment, we initially deployed version 2.6.3.0, then had to downgrade to 2.6.2.14 due to a Java incompatibility with the SAS/ACCESS Interface to Hadoop. Since this was a fresh deployment, we handled the downgrade by simply wiping out the existing installation using the HostCleanup.py script and the ambari-server reset command, then redeploying. There were a few artifacts left over that caused warnings during configuration, but we were able to remove those and continue deploying.

Once it was fully deployed (and validated as operational), we went to deploy the SAS Embedded Process, an extra service, and found that any attempt to change the server configuration triggered this glitch in the UI, where we could not adjust the data directory away from /home/hadoop/hdfs/data, which is an invalid storage location.

We discovered that the original data directory, /hadoop/hdfs/data, had not been wiped out after the initial deployment and still carried the previous deployment's identifiers on all of its folders. To reset it, we moved each data directory to /hadoop/hdfs/data.old, then rebooted the data nodes to get clean folders with the correct names that the name nodes expected (a rough sketch of that rename step follows below).
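
For completeness, here is a minimal, illustrative sketch of that rename step on a single data node, assuming the default /hadoop/hdfs/data location; it is not the exact procedure we ran, and the DataNode should be stopped before its directory is moved.

    #!/usr/bin/env python
    # Minimal, illustrative sketch: rename the stale HDFS data directory on one
    # data node so the DataNode recreates a clean one on restart. Stop the
    # DataNode first, and keep the .old copy until you are sure nothing is missing.
    import os

    data_dir = "/hadoop/hdfs/data"      # configured dfs.datanode.data.dir
    backup_dir = data_dir + ".old"      # stale directory kept as a backup

    if os.path.isdir(data_dir) and not os.path.exists(backup_dir):
        os.rename(data_dir, backup_dir)
        print("Moved %s to %s" % (data_dir, backup_dir))
    else:
        print("Skipping: check %s and %s manually" % (data_dir, backup_dir))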

For whatever reason, this mismatch in the folder name was causing the configuration UI to freak out and replace our changes with the default settings. Getting the correct data folders in place fixed our issue with the UI.
