Support Questions
Find answers, ask questions, and share your expertise

hdfs commands not working from StandBy namenode in High Availability environment

Solved

Explorer

Dear fellow hadoopers,

I use the HDPv2.2 sandbox for learning purposes. Some time ago I did all the lab exercises for the Hortonworks admin course ("HDP Operations: Install and Manage with Apache Ambari"). Yesterday I turned the machine on and wanted to delve deeper into administration. I noticed a strange thing: I can run "hdfs dfs -cat SOMEFILE" on any file on HDFS from any of my nodes EXCEPT the standby namenode.

The debugging ("HADOOP_ROOT_LOGGER='DEBUG,console' hdfs dfs -cat merged.txt") showed the following problem:

"INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over node1/172.17.0.2:8020. Trying to fail over immediately. org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby"

On the standby namenode (node1) I can only run the "hdfs dfs -ls" and "hdfs dfs -put" commands; everything else fails with a NullPointerException.

I checked the service states of both the standby (node1, standby) and active (node4, active) namenodes using "hdfs haadmin -getServiceState".
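For reference, client-side failover is driven by the HA settings in hdfs-site.xml; if the failover proxy provider is missing or wrong, the client talks to a single namenode instead of retrying against the active one. A minimal sketch of the relevant properties (the nameservice name "mycluster" and the IDs nn1/nn2 are assumptions, not values from my cluster):

```xml
<!-- Sketch of the HA client settings in hdfs-site.xml.
     "mycluster", "nn1", and "nn2" are assumed names. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```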

Does anyone have an idea of what could be wrong? Why are hdfs commands issued from my standby node not failing over to the active node?

Thank you very much

jaro

1 ACCEPTED SOLUTION

Accepted Solutions

Re: hdfs commands not working from StandBy namenode in High Availability environment

Explorer

Well, this was indeed an unnecessary struggle... :-(

I found the root cause in the logs of the now-active namenode: java.io.IOException: Cannot run program "/etc/hadoop/conf/rack-topology.sh"

When I enabled High Availability in the cluster as part of the exercises required by the admin course, I simply forgot to provide the new namenode with the rack-topology.sh script (changing the topology was an earlier exercise).
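For anyone hitting the same issue: the script configured as the topology script (net.topology.script.file.name in core-site.xml) has to exist and be executable on every namenode host. A minimal sketch of such a script; the flat /default-rack mapping is an assumption for a sandbox, not the course's original script:

```shell
#!/bin/sh
# Minimal rack topology script sketch (assumption: every node maps to
# /default-rack, which is enough for a single-rack sandbox).
# Hadoop invokes the script with one or more IPs/hostnames and expects
# exactly one rack path printed per argument, in order.
resolve_racks() {
  for node in "$@"; do
    echo "/default-rack"
  done
}

resolve_racks "$@"
```

Copy it to /etc/hadoop/conf/rack-topology.sh on both namenodes and make it executable (chmod +x).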

Now all hdfs commands can be run from both namenodes.


