Very slow hdfs command responses in cluster members other than namenodes


Hi,

I tried to find an answer in previous posts, but I couldn't find one.

Well, I get very slow responses from commands like "hdfs dfs -ls /" when they are executed on cluster members other than the namenodes. Comparing responses, a simple "hdfs dfs -ls /" on a namenode takes 2 to 3 seconds, while on any other cluster machine it takes 22 seconds. I tried to debug the process but I can't find anything different between them. When the response is slow, it always stops for 20 seconds after "DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@3c77d488" and before "DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@130263db: starting with interruptCheckPeriodMs = 60000".
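
For reference, this is roughly how I'm comparing the two cases (the DEBUG console logging is just the standard HADOOP_ROOT_LOGGER setting, run the same way on a namenode and on another node):

# time the same command on a namenode and on any other cluster member
time hdfs dfs -ls /

# repeat with client-side DEBUG logging on the console to see where it stalls
export HADOOP_ROOT_LOGGER=DEBUG,console
hdfs dfs -ls /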

Any help?

Best regards,

Silvio

7 REPLIES

Re: Very slow hdfs command responses in cluster members other than namenodes

Hi @Silvio del Val. Is there good communication performance between the other nodes and the NameNode in general? Try to test how quickly ping works, for example. Those commands will contact the namenode for the information you request, so I'm thinking there might be some problem with your network performance.
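
For example, something along these lines from one of the slow nodes (the hostname is a placeholder, and 8020 is only the default NameNode RPC port - use whatever your fs.defaultFS points to):

ping -c 5 <namenode-host>       # raw network latency to the NameNode host
nc -vz <namenode-host> 8020     # confirm the NameNode RPC port is reachable quickly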


Re: Very slow hdfs command responses in cluster members other than namenodes

Hi Ana,

Thanks for your answer. Well, all cluster machines are connected to the same physical switches, so I don't think it's a network problem.

I think it has to be something regarding configs...but I don't know.


Re: Very slow hdfs command responses in cluster members other than namenodes

Hm, it's hard to tell. I doubt it would be any Hadoop-specific configuration, because it's fast on the NameNode machine and you would have the same *-site.xml files on the others. Are those other cluster members VMs or still physical? Do they all consistently behave the same way? What else is running on them?

Also, have you had a look at whether those other cluster machines are busy - e.g. whether they have enough free memory to run the JVM?
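
A quick sanity check on one of the slow machines could look like this (standard Linux tools, nothing Hadoop-specific):

free -m                   # available memory for the client JVM
uptime                    # load average
top -b -n 1 | head -20    # what is currently consuming CPU and memory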


Re: Very slow hdfs command responses in cluster members other than namenodes


Can you try "export HADOOP_ROOT_LOGGER=TRACE,console" before running "hdfs dfs -ls /"? That will reveal more end-to-end RPC-related traces to help pin down the root cause.
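
In other words, on one of the slow nodes:

export HADOOP_ROOT_LOGGER=TRACE,console
hdfs dfs -ls /    # the console output should now show where the 20 seconds go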


Re: Very slow hdfs command responses in cluster members other than namenodes

Well, I tried to debug it some days ago, but I didn't understand why it stopped for 20 seconds at a single point after the command.

On the nodes where the query is slow, it always stops here:

17/02/23 12:28:50 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@1a942c18
17/02/23 12:29:10 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@173ba36d: starting with interruptCheckPeriodMs = 60000

It seems to take 20 seconds (and always exactly 20 seconds) to get the client out of the cache, while on the namenodes it takes no time at all, which is why the queries are faster there. But I don't know what "getting client out of cache" means.


Re: Very slow hdfs command responses in cluster members other than namenodes


What is your version of Hadoop? Could you post the output from "hadoop version"?


Re: Very slow hdfs command responses in cluster members other than namenodes


I think the problem is DNS resolution. Try clearing /etc/resolv.conf. @Silvio del Val
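
A quick way to test that theory on one of the slow nodes (hostname is a placeholder): check whether a lookup of the NameNode host stalls, and whether /etc/resolv.conf lists a nameserver that is unreachable from that node.

cat /etc/resolv.conf                 # any nameservers this node cannot actually reach?
time getent hosts <namenode-host>    # if this hangs for ~20 s, a DNS timeout matches the symptom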
