We set the hive_user_nofile_limit param to 32000 in the Ambari UI.
Indeed, we see this value in /etc/security/limits.d/hive.conf, but in /proc/<hive-pid>/limits I don't see it; even after a restart, both the soft and hard "Max open files" limits are still the original 4096.
How do I apply this change to the hive process?
Also, I can see that all the open files are under /tmp/hive/operation_logs. How necessary are these logs? Should I just sidestep the above error by disabling their writing?
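For reference, the limits the kernel actually enforces on a running process can always be read from /proc/<pid>/limits. A minimal sketch of the check (the pgrep pattern is an assumption; the runnable line uses this shell's own pid as a stand-in for the hive pid):

```shell
# Look up the hive pid, e.g. (pattern "hiveserver2" is an assumption,
# adjust for your install):
#   HIVE_PID=$(pgrep -f hiveserver2 | head -1)
# Then read the enforced limits from /proc/<pid>/limits.
# As a stand-in that runs anywhere, inspect this shell's own pid:
grep 'Max open files' /proc/$$/limits
```

If this still shows 4096 for the hive pid after a restart, the new limit from limits.d never reached the process that launched hive.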
@Michael Belostoky: I have faced the same issue, and this was the suggestion from an SME: the problem can be caused by the Ambari agent, so he advised making sure ambari-agent has been restarted.
If you haven't already, would you mind restarting the agent, then restarting the DataNodes from Ambari again, and then checking the limit with the following command:
cat /proc/`lsof -ti:8010`/limits | grep 'open files\|processes'
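As a sanity check of that command itself, here is a self-contained demonstration (no Hadoop needed) that a changed limit really does show up in /proc, which is exactly what you want to see for the hive/DataNode pid after the restart:

```shell
# Lower the soft open-files limit in a bash subshell and confirm
# /proc reflects it. $BASHPID is bash-specific and names the
# subshell's own pid ($$ would still name the parent shell).
( ulimit -Sn 1024; grep 'Max open files' /proc/$BASHPID/limits )
```

The line printed should show 1024 as the soft limit, not the value the parent shell started with.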
As a workaround, the following has worked for me. Add this line to hadoop-env.sh:
ulimit -n 32000
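For context, a slightly more defensive version of that edit might look like the fragment below. The path and the guard are my additions, not part of the original answer: ulimit in hadoop-env.sh raises the limit for every daemon started through that script, and the guard avoids a hard failure when the hard limit is lower than the target.

```shell
# hadoop-env.sh fragment (path varies by distribution;
# /etc/hadoop/conf/hadoop-env.sh is typical on HDP, but that is an
# assumption). Raise the soft open-files limit only when the hard
# limit allows it; otherwise warn instead of failing the script.
if [ "$(ulimit -Hn)" != "unlimited" ] && [ "$(ulimit -Hn)" -lt 32000 ]; then
  echo "WARN: hard nofile limit $(ulimit -Hn) is below 32000" >&2
else
  ulimit -n 32000
fi
```

Note this only affects processes launched via hadoop-env.sh, so daemons started another way still need the limits.d fix to take effect.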