Our cluster runs HDP 2.6.1 with Ambari 2.5.1. When a user executes the hadoop fs -mkdir command, it fails with the error "create gc thread. out of system resources". We know that when HADOOP_CLIENT_HEAP_SIZE is too small, the hadoop fs -ls command can hit an OOM, but our cluster's HADOOP_CLIENT_HEAP_SIZE is set to 8 GB, which we believe is already large enough. We also checked the user's ulimit: the maximum number of open files is 11000 and the maximum number of processes is 1024, but we do not know how many processes were running when the problem occurred. Is the cause of this OOM neither of the two factors above? Is there another possible cause?
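The two limits mentioned above can be inspected directly on the client host. A minimal diagnostic sketch using standard Linux commands (the exact numbers will of course vary per system; treat the thread count as an estimate):

```shell
#!/bin/sh
# Show the per-user limits relevant to JVM thread creation.
# On Linux every JVM thread is a lightweight process, so GC
# threads count against "max user processes", not the heap.
ulimit -n   # max open files (11000 in the question)
ulimit -u   # max user processes/threads (1024 in the question)

# Approximate number of threads currently owned by this user;
# if it approaches the "ulimit -u" value, new JVM threads
# (including GC threads) can fail at startup.
# (ps may truncate long usernames, so this is an estimate.)
ps -eLo user= | grep -c "^$(whoami)"
```

Running this as the affected user right after a failure reproduces the conditions the JVM saw when it tried to start its GC threads.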
One option is to increase the heap size and verify, but you have already mentioned that the configured heap is more than enough. Note that "create gc thread. out of system resources" typically means the JVM could not spawn a native thread at startup, and on Linux each JVM thread counts against the per-user process limit, so the max-processes value of 1024 you mention is a likely culprit; check how many processes/threads the user has when the error occurs. Beyond that, you could also try cleaning up anything unnecessary on the NameNode, as that is another possible cause of this kind of issue.