08-01-2017 07:43 AM
In our Hadoop cluster (Cloudera distribution), we recently found that a Hive job started by a user had created 160 TB of files under the HDFS '/tmp' location; it consumed almost all of the remaining HDFS space and nearly caused an outage. We ended up troubleshooting and killing the job ourselves, as we were unable to reach the user who had started it.
So my question is: can we set up an alert on the '/tmp' location for when anyone creates huge files there, or can we restrict how much HDFS '/tmp' space users may consume?
Please share if you have any other suggestions.
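One way to get the alert the question asks for is a small script that polls HDFS '/tmp' usage and mails when it passes a threshold. Below is a minimal sketch; the 10 TB threshold and the recipient address are illustrative, and it assumes the `hdfs` client is on the PATH and `mail` is configured on the host.

```shell
#!/bin/sh
# Alert when HDFS /tmp grows past a threshold (sketch; values are examples).

THRESHOLD_BYTES=$((10 * 1024 * 1024 * 1024 * 1024))   # 10 TB, illustrative
RECIPIENT="hadoop-admins@example.com"                  # hypothetical address

# The first field of 'hdfs dfs -du -s' output is the raw size in bytes.
USED_BYTES=$(hdfs dfs -du -s /tmp | awk '{print $1}')

if [ "$USED_BYTES" -gt "$THRESHOLD_BYTES" ]; then
    # Include the per-subdirectory breakdown so the offending job's
    # owner is visible in the alert mail.
    hdfs dfs -du -h /tmp | \
        mail -s "HDFS /tmp over threshold: ${USED_BYTES} bytes" "$RECIPIENT"
fi
```

Scheduled via cron (e.g. every 15 minutes), this would have flagged the 160 TB runaway job long before it threatened an outage.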
08-01-2017 09:56 AM
There are a few options:
1. If you have Linux monitoring tools such as Nagios, New Relic, or Ganglia, you can set up an alert for the file system (/tmp will be mounted on a file system) and have it trigger a mail whenever any file system is running out of space.
2. You can write a shell script that triggers a mail based on the available space, and schedule it via cron.
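Option 2 above could be sketched like this for a local file system; the 90% threshold and recipient address are illustrative, and it assumes `mail` is configured on the host:

```shell
#!/bin/sh
# Cron-driven alert when the file system holding /tmp runs low on space
# (sketch; threshold and recipient are examples).

THRESHOLD_PCT=90
RECIPIENT="ops-team@example.com"   # hypothetical address

# 'df -P /tmp' prints one header line plus one data line; the 5th field
# is "Use%" (e.g. "87%"). Adding 0 in awk strips the '%' sign.
USED_PCT=$(df -P /tmp | awk 'NR==2 {print $5+0}')

if [ "$USED_PCT" -ge "$THRESHOLD_PCT" ]; then
    df -hP /tmp | mail -s "/tmp file system at ${USED_PCT}% capacity" "$RECIPIENT"
fi
```

A crontab entry such as `*/15 * * * * /usr/local/bin/tmp_space_check.sh` would run the check every 15 minutes.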
08-01-2017 10:41 AM
08-02-2017 07:08 AM
@saranvisa, Thanks for your reply.
I am talking about the HDFS '/tmp' directory, not the '/tmp' file system on the host machine.
08-02-2017 07:10 AM
@mbigelow Thanks for your reply too.
"hdfs dfsadmin -setSpaceQuota" won't work on the HDFS '/tmp' location. I need to find some other alternative.
08-02-2017 08:11 AM
Why won't it work? Have you tried /tmp and /tmp/hive/<user.name>?
If quotas really can't be applied to /tmp or its subdirectories, the alternative is to set alerts on overall HDFS capacity, or on disk usage for the disks hosting the DFS data directories.
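The quota approach discussed above could be tried as follows; these commands require HDFS superuser privileges, and the 1t figure and the per-user path are illustrative (the actual subdirectory name comes from the Hive scratch-dir layout on the cluster):

```shell
# Cap the space (replication included) that a directory's contents may
# consume; binary suffixes like 50g or 2t are accepted.
hdfs dfsadmin -setSpaceQuota 1t /tmp/hive/alice

# Verify: with -q, 'count' reports QUOTA, REM_QUOTA, SPACE_QUOTA and
# REM_SPACE_QUOTA columns for the directory.
hdfs dfs -count -q /tmp/hive/alice

# Remove the quota again if it causes problems for legitimate jobs.
hdfs dfsadmin -clrSpaceQuota /tmp/hive/alice
```

Note that a space quota counts post-replication bytes, so with the default replication factor of 3, a 1t quota allows roughly 333 GB of raw data.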