Support Questions


Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB

Expert Contributor

I had 6 ZooKeeper nodes, and Cloudera Manager warned me that I should have at most 5.

I stopped the entire cluster and deleted the ZooKeeper role from one of the 6 nodes (the deleted node was a follower).

Upon restarting the cluster, everything seemed fine, but now my attempts to use HDFS result in this error:

ubuntu@ip-10-0-0-157:~$ hdfs dfs -ls /
17/11/14 19:10:47 WARN retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB after 1 fail over attempts. Trying to fail over after sleeping for 787ms.
17/11/14 19:10:48 WARN retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB after 2 fail over attempts. Trying to fail over after sleeping for 1030ms.
17/11/14 19:10:49 WARN retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB after 3 fail over attempts. Trying to fail over after sleeping for 2930ms.

Importantly, this error only affects the default usage above. If, instead, I specify the NameNode explicitly, everything works normally:

ubuntu@ip-10-0-0-156:~$ hdfs dfs -ls hdfs://10.0.0.246:8020/
Found 3 items
drwxr-xr-x   - hdfs supergroup          0 2017-11-11 22:15 hdfs://10.0.0.246:8020/system
drwxrwxrwt   - hdfs supergroup          0 2016-02-07 15:08 hdfs://10.0.0.246:8020/tmp
drwxr-xr-x   - hdfs supergroup          0 2016-10-21 18:01 hdfs://10.0.0.246:8020/user

Note: I still have the old ZooKeeper node and can re-add that role to it if that might help.
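The failover warnings above are consistent with a client that is configured for NameNode HA but can no longer use its nameservice mapping, while addressing a NameNode directly bypasses that mapping. As a rough sketch of where that mapping lives on the client side, the snippet below writes a local sample file and lists the RPC addresses the client would fail over between. The nameservice name "nameservice1", the filename, and the second address 10.0.0.247 are placeholders; on a real client these properties live in /etc/hadoop/conf/hdfs-site.xml.

```shell
# Sample hdfs-site.xml fragment illustrating the HA client mapping
# (placeholder nameservice and second address; real file is
# /etc/hadoop/conf/hdfs-site.xml on the client host)
cat > hdfs-site-sample.xml <<'EOF'
<configuration>
  <property><name>dfs.nameservices</name><value>nameservice1</value></property>
  <property><name>dfs.ha.namenodes.nameservice1</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.nameservice1.nn1</name><value>10.0.0.246:8020</value></property>
  <property><name>dfs.namenode.rpc-address.nameservice1.nn2</name><value>10.0.0.247:8020</value></property>
</configuration>
EOF

# List the NameNode RPC addresses the client fails over between
grep -o '<value>[0-9.]*:8020</value>' hdfs-site-sample.xml | sed 's/<\/\?value>//g'
```

If the addresses printed from the real client file are stale or missing, the client configuration is out of date relative to the cluster.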

1 ACCEPTED SOLUTION

Master Guru

@epowell, yes, you are correct. When CDH is managed by Cloudera Manager, client configurations are managed separately from the configurations the servers use. Run Deploy Client Configuration for your cluster to make sure the files in /etc/hadoop/conf contain the latest configuration items. Once that is done, you should be able to run commands normally.
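After redeploying, one quick sanity check is that the client's fs.defaultFS points at the HA nameservice rather than a single host. The sketch below parses a local sample file standing in for /etc/hadoop/conf/core-site.xml; the nameservice name "nameservice1" and the sample filename are placeholders.

```shell
# Sample core-site.xml standing in for /etc/hadoop/conf/core-site.xml;
# "nameservice1" is a placeholder nameservice name
cat > core-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nameservice1</value>
  </property>
</configuration>
EOF

# Print the value that follows the fs.defaultFS property name
grep -A1 '<name>fs.defaultFS</name>' core-site-sample.xml \
  | grep -o '<value>[^<]*</value>' \
  | sed 's/<\/\?value>//g'
# → hdfs://nameservice1
```

On a live client, `hdfs getconf -confKey fs.defaultFS` reports the same value without parsing the XML by hand.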

Ben


10 REPLIES

Master Guru

Sorry for the late reply; glad to hear you went ahead with it and that it turned out to be the right solution.

Cheers!