Hive Filling up /tmp directory on local filesystem


Hi there,

We have a cluster with two Hadoop NameNodes and three DataNodes. HiveServer2 and the Hive Metastore also run on each of the NameNodes.

We're encountering an issue where certain jobs fill up the disk on the local filesystem of the Hive instance our services run against, causing the jobs to fail and restart in an endless loop.

We initially had "hive.exec.local.scratchdir" set to /tmp/{user.name} and then changed the setting to "hive.exec.scratchdir", in the hope that Hive would write its temp files to HDFS instead of a local directory, but with no success.
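
For reference, this is roughly how those two properties look in hive-site.xml — a minimal sketch; the values shown are illustrative placeholders, not our actual paths:

<!-- hive.exec.scratchdir is the HDFS scratch space for Hive jobs;
     hive.exec.local.scratchdir is scratch space on the local filesystem
     and has to be a local path. Values below are placeholders. -->
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/data/hive/scratch</value>
</property>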

We have now also set "hive.exec.mode.local.auto" to true, per the configuration descriptions on this page: https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties, and set all of the local-mode threshold criteria to 0 in an attempt to force Hive not to run anything in local mode, but still no success 😞
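
Concretely, here is a sketch of what we have set now — the property names are taken from that wiki page, and this is an approximation of our hive-site.xml rather than an exact copy:

<!-- Auto local-mode is on, but with the qualifying thresholds set to 0
     so that (in theory) no query should ever satisfy the local-mode criteria. -->
<property>
  <name>hive.exec.mode.local.auto</name>
  <value>true</value>
</property>
<property>
  <!-- Maximum total input size for a query to run in local mode. -->
  <name>hive.exec.mode.local.auto.inputbytes.max</name>
  <value>0</value>
</property>
<property>
  <!-- Maximum number of input files for a query to run in local mode. -->
  <name>hive.exec.mode.local.auto.input.files.max</name>
  <value>0</value>
</property>

(If I'm reading the wiki right, hive.exec.mode.local.auto defaults to false anyway, so these thresholds only matter while the auto setting is true.)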

How can I prevent Hive from writing its scratchdir/tmp files locally? We do not have enough disk space on any single instance to hold all of the temp data it wants to create.

3 REPLIES

Guru

Is /tmp filling up on your client/gateway node or on the HiveServer2 node? If it is on the client node, are you using the Hive CLI or Beeline?

I have looked at a cluster that runs a few hundred queries a day using Beeline. I checked both the gateway node and the HiveServer2 node and don't see much data in /tmp. Can you list the contents of /tmp with their sizes if you are using Beeline and HiveServer2?


Responded in comment below.


/tmp is filling up on our HiveServer2 node... I do not believe we have any sort of client/gateway node. We have Beeline but do not use it; our app runs against Hive directly.

The directory is /tmp/hive because we run as user=hive. The hex-named directories correspond to different datasets that we store in Hadoop and query via Hive. The vast majority of the disk space is taken up by a minority of the datasets. (The listing below is an early check; the directory continues to fill until our 200 GB disk runs out of space, at which point the job dies and then cleans up the /tmp directory.)

root@hadoop-m-uscen-b-c001-n002:/tmp/hive# du -sh */
4.0K    304dbcfe-6ba9-470f-9b2b-3c2f64f8d4eb/
7.1G    72df6a01-9c1a-4de3-83be-9a239f86767f/
4.0K    f4dc7e2a-0cc9-415f-8d9e-e7115e0bbcea/
16K     operation_logs/


root@hadoop-m-uscen-b-c001-n002:/tmp/hive# du -sh */
4.0K    304dbcfe-6ba9-470f-9b2b-3c2f64f8d4eb/
8.2G    72df6a01-9c1a-4de3-83be-9a239f86767f/
4.0K    f4dc7e2a-0cc9-415f-8d9e-e7115e0bbcea/