In our Hadoop cluster (Cloudera distribution), we recently found that a Hive job started by a user created about 160 TB of files under the HDFS '/tmp' directory. It consumed almost all of the remaining HDFS space and nearly caused an outage. We eventually had to troubleshoot and kill the job ourselves, since we were unable to reach the user who started it.
So my question is: can we set up an alert on the '/tmp' directory for when anyone creates huge files there, or can we restrict how much HDFS '/tmp' space users are allowed to consume?
Please also share any other suggestions you may have.
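For anyone hitting the same situation, the offending usage can typically be located, and the job killed, with standard HDFS/YARN shell commands. A sketch (the application id below is a placeholder, not from the incident above):

```shell
# List the largest consumers under /tmp, biggest first.
# `hdfs dfs -du` prints raw byte counts, which sort cleanly with `sort -n`
# (the human-readable -h output does not sort correctly this way).
hdfs dfs -du /tmp | sort -n -r | head -n 10

# Find the running YARN application that owns the data, then kill it.
yarn application -list

APP_ID="application_1600000000000_0001"   # placeholder: use the id reported by `yarn application -list`
yarn application -kill "$APP_ID"
```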
There are a couple of options:
1. If you have Linux monitoring tools such as Nagios, New Relic, or Ganglia, you can set up an alert on the file system ('/tmp' will be mounted on a file system) and trigger an email whenever any file system is running out of space.
2. You can write a shell script that triggers an email based on the available space, and schedule it via cron.
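Option 2 can be sketched as below. The threshold, script path, and recipient address are assumptions, and `mailx` is assumed to be configured on the host:

```shell
#!/bin/sh
# Alert when overall HDFS usage crosses a threshold.
# Schedule via cron, e.g.:  */15 * * * * /opt/scripts/hdfs_space_alert.sh
THRESHOLD=80                            # percent used that triggers the alert (assumption)
RECIPIENT="hadoop-admins@example.com"   # hypothetical mailing list

# `hdfs dfsadmin -report` prints a line like:  DFS Used%: 42.17%
# Extract the integer part of that percentage.
USED=$(hdfs dfsadmin -report | awk '/^DFS Used%/ {print int($3); exit}')

if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "HDFS usage is ${USED}% (threshold ${THRESHOLD}%)" \
        | mailx -s "HDFS space alert" "$RECIPIENT"
fi
```

The same pattern works for the local '/tmp' mount by swapping the `hdfs dfsadmin -report` parsing for `df -P /tmp`.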
Why wouldn't quotas work? Have you tried applying them to '/tmp' and '/tmp/hive/<user.name>'?
If quotas can't be applied to '/tmp' or its subdirectories, the alternative is to set alerts on overall HDFS capacity or on disk space for the disks hosting the DFS data directories.
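For the quota route, a sketch of the admin commands (the 30 TB figure is an example, not a recommendation; note that a space quota counts replicated bytes, so with the default replication factor of 3, 30 TB of quota allows roughly 10 TB of file data):

```shell
# Cap the raw space that /tmp may consume (suffixes like g/t are accepted)
hdfs dfsadmin -setSpaceQuota 30t /tmp

# Verify: prints QUOTA, REM_QUOTA, SPACE_QUOTA, REM_SPACE_QUOTA and counts
hdfs dfs -count -q /tmp

# Remove the quota again if needed
hdfs dfsadmin -clrSpaceQuota /tmp
```

Once the quota is exceeded, further writes under '/tmp' fail with a quota-exceeded error instead of silently filling the cluster.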