
OOM when executing hadoop fs -mkdir


New Contributor

Our cluster is HDP 2.6.1 with Ambari 2.5.1. When a user runs the hadoop fs -mkdir command, it fails with an OOM error: "create GC thread. Out of system resources". We know that hadoop fs -ls can hit an OOM when HADOOP_CLIENT_HEAP_SIZE is too small, but our cluster's HADOOP_CLIENT_HEAP_SIZE is already set to 8G, which we believe is large enough. We also checked the user's ulimits: the maximum number of open files is 11000 and the maximum number of processes is 1024, but we don't know how many processes the user had running when the problem occurred. Could the OOM be caused by something other than the two settings mentioned above? Are there any other possible reasons?
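For context, that particular message usually means the JVM could not create a native OS thread rather than that it ran out of Java heap, so the nproc ulimit (which counts threads per user on Linux, and 1024 is easy to exhaust) and available system memory are worth checking before the heap. A minimal sketch of what could be run as the affected user when the error appears (nothing cluster-specific is assumed here):

```bash
# Show the limits the hadoop client process actually runs under
ulimit -u    # max user processes; threads count against this on Linux
ulimit -n    # max open files
ulimit -v    # max virtual memory

# Count how many threads the user currently owns across all processes;
# if this is near the nproc limit, the JVM cannot create new GC threads
ps -eLf | awk -v u="$USER" '$1 == u' | wc -l

# Confirm which extra JVM options the client actually receives
echo "$HADOOP_CLIENT_OPTS"
```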

2 REPLIES

Re: OOM when executing hadoop fs -mkdir

New Contributor

Our NameNode heap utilization is about 90%. Could this be the reason?
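If you want to confirm that figure independently of Ambari, one way is the NameNode's JMX servlet, which reports JVM heap usage directly. A small sketch, assuming a default HDP 2.6 non-HA, non-SSL setup where the NameNode web UI listens on port 50070 (the host name is a placeholder):

```bash
# Query the NameNode JMX endpoint for JVM heap usage and print used/max in MB
curl -s 'http://namenode-host:50070/jmx?qry=java.lang:type=Memory' \
  | python -c 'import json,sys; m=json.load(sys.stdin)["beans"][0]["HeapMemoryUsage"]; print("used MB:", m["used"]//2**20, "max MB:", m["max"]//2**20)'
```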

Re: OOM when executing hadoop fs -mkdir

Guru

Hi Joe

One option is to increase the heap size and verify, but you have already mentioned that the configured heap size is more than enough. So try cleaning up the NameNode namespace by removing files and directories that are no longer needed, since heavy NameNode heap usage is another possible cause of this kind of issue.
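To rule the client heap out completely, you can also override it for a single invocation via HADOOP_CLIENT_OPTS, which the hadoop fs client picks up. A quick sketch (the -Xmx value and the target path are only illustrations, not recommendations for your cluster):

```bash
# Override the client JVM heap for this one command only and log GC activity;
# HADOOP_CLIENT_OPTS is added to the options of the JVM that 'hadoop fs' launches
HADOOP_CLIENT_OPTS="-Xmx2g -XX:+PrintGCDetails" hadoop fs -mkdir /tmp/oom-test
```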

Hope it helps!!