
HDFS NameNode restart needed: rack assigned in UI not reflected in dfsadmin report

Hi Community,

 

I have faced the following issue several times at a client site and was able to reproduce it in a lab of my own with the latest versions of CM and CDH. I can't describe this as anything but a bug, or at least a gap in the UI and the process:

 

Steps to Reproduce:

 

- On a cluster where all DataNodes are assigned to a specific rack (e.g. "/us-east/1d"), add a node through the CM Add Host wizard.

- When adding the node, it will automatically be placed in the "default" rack. To avoid it being assigned there and generating unwanted balancing activity across racks (it would be the first node on a new rack named "default"), choose the option to NOT start roles.

- Once the node is added but its roles are not started, assign it to the proper rack, the same one as the rest of the cluster.

- After assigning it to the correct rack, start the roles.

- Then redeploy the client configurations and do a cluster refresh; any warnings or alerts should clear.

 

At this point the CM UI shows the new host on the custom rack "/us-east/1d", however the output of "hdfs dfsadmin -report" shows the node as part of the "default" rack. This is an inconsistency, and I'm not sure at this point which one is telling the truth!
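
For reference, these are the standard dfsadmin subcommands that can be used to see the rack mapping as the NameNode itself holds it (typically run as the hdfs user, or with HDFS superuser privileges on a kerberized cluster):

    # Per-DataNode report; the "Rack:" line shows the rack the NameNode has resolved for each node
    hdfs dfsadmin -report

    # Compact view of the rack-to-DataNode mapping currently held by the NameNode
    hdfs dfsadmin -printTopology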

 

The workaround I found is to restart the NameNodes, basically a rolling restart in case you have HA.
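
For anyone who wants to script that workaround, a rolling restart of the HDFS service can presumably also be triggered through the Cloudera Manager REST API. The call below is only a sketch, not taken from the docs: it assumes API version v19, the default admin credentials, a cluster named "Cluster 1" and an HDFS service named "hdfs", so adjust all of those to your environment:

    # Ask CM to rolling-restart the HDFS service
    # (the argument field names may differ per API version, check the API docs for your CM release)
    curl -u admin:admin -X POST \
      -H "Content-Type: application/json" \
      -d '{"slaveBatchSize": 1}' \
      "http://cm-host:7180/api/v19/clusters/Cluster%201/services/hdfs/commands/rollingRestart"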

 

If the command output differs from what the Cloudera UI shows, there should be a warning somewhere that this is the case, but there is not, so the user is led to think everything is OK when in fact it is not.

 

I would like to know if you have a permanent solution for this, and also any recommendation for adding a node through the wizard directly into a particular rack.
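
As a side note, one way this could probably be scripted is to set the rack through the CM REST API right after the host is added and before any role is started. Again only a sketch under the same assumptions as above (API v19, default credentials, CM on cm-host:7180), with "abcd-1234" standing in for the real hostId:

    # List hosts to find the hostId of the newly added node
    curl -u admin:admin "http://cm-host:7180/api/v19/hosts"

    # Update that host's rack before starting any roles ("abcd-1234" is a placeholder hostId)
    curl -u admin:admin -X PUT \
      -H "Content-Type: application/json" \
      -d '{"rackId": "/us-east/1d"}' \
      "http://cm-host:7180/api/v19/hosts/abcd-1234"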

 

Regards,

 

Nicolas Parducci
