
HDFS /tmp filesystem is filling up rapidly and is expected to cause an outage

New Contributor

In our Hadoop cluster (Cloudera distribution), we recently found that a Hive job started by a user created about 160 TB of files under the HDFS '/tmp' location, nearly consuming the remaining HDFS space and almost causing an outage. We eventually troubleshot the issue and killed the job, since we were unable to reach the user who had started it.

So my question is: how can we set an alert on the HDFS '/tmp' location when someone creates huge files, or can we restrict how much HDFS '/tmp' space users may consume?

Please share any other suggestions you may have.

6 Replies

Champion

@Srini4u

There are a couple of options:

1. If you have Linux monitoring tools such as Nagios, New Relic, Ganglia, etc., you can set up an alert on the filesystem (/tmp will be mounted on a filesystem) and trigger an email whenever any filesystem is running out of space.

2. You can write a shell script that triggers an email based on the available space and schedule it via cron, as in the sketch below.
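
A minimal sketch of option 2, assuming a working mail/mailx setup on the host; the 90% threshold and the alert address are placeholders to adjust for your environment:

    #!/bin/bash
    # Alert when any mounted filesystem exceeds a usage threshold.
    THRESHOLD=90
    df -P | awk 'NR>1 {gsub("%","",$5); print $5, $6}' | while read usage mount; do
      if [ "$usage" -ge "$THRESHOLD" ]; then
        echo "Filesystem $mount is at ${usage}% capacity" \
          | mail -s "Disk space alert: $mount" admin@example.com
      fi
    done

Schedule it from cron, e.g. every 15 minutes: */15 * * * * /path/to/disk_alert.sh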

New Contributor

@saranvisa, thanks for your reply.

I am talking about the HDFS temp filesystem, not the host machine's temp filesystem.

Please advise.

Champion

I don't know HDFS quotas in depth, but they should fit the bill.

https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html
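
For example, assuming a 10 TB cap on /tmp is appropriate for your cluster (note that space quotas count raw bytes, so each replica of a block counts against the quota):

    # Set a 10 TB space quota on /tmp
    hdfs dfsadmin -setSpaceQuota 10t /tmp

    # Remove the quota again if needed
    hdfs dfsadmin -clrSpaceQuota /tmp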

In CM you can configure alerts to notify you when disks and HDFS are nearing capacity.

New Contributor

@mbigelow, thanks for your reply too.

"hdfs dfsadmin -setSpaceQuota" won't work on the HDFS temp location, so I need to find some other alternative.

Champion

Why won't it work? Have you tried /tmp and /tmp/hive/<user.name>?

The alternative, if quotas can't be applied to /tmp or its subdirectories, is to set alerts for HDFS capacity or for disk space on the disks hosting the DFS directories; a sketch of an HDFS-side check follows.
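
A minimal sketch of such an HDFS-side check, run from cron on a node with an HDFS client; the 50 TiB limit and the alert address are assumptions to tune for your cluster:

    #!/bin/bash
    # Alert when HDFS /tmp grows beyond a size threshold.
    LIMIT_BYTES=$((50 * 1024 ** 4))   # 50 TiB
    # First column of 'hdfs dfs -du -s' is the content size in bytes.
    used=$(hdfs dfs -du -s /tmp | awk '{print $1}')
    if [ "$used" -gt "$LIMIT_BYTES" ]; then
      echo "HDFS /tmp is using $used bytes (limit $LIMIT_BYTES)" \
        | mail -s "HDFS /tmp space alert" admin@example.com
    fi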

Explorer

Setting a quota will work; queries that exceed it will fail with quota errors.
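
For what it's worth, you can verify the quota and current consumption with the documented count command; writes past the limit typically fail with a DSQuotaExceededException:

    # Show quota, remaining quota, space quota and remaining space quota for /tmp
    hdfs dfs -count -q -h /tmp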