
Mapreduce job failure due to hadoop-unjarxxxxxx directory under /tmp

Hi Team,

 

We are facing an issue where MapReduce jobs are failing with the error below.

Distribution: Cloudera (CDH 5.14.4)

 

====

Exception in thread "main" java.io.FileNotFoundException: /tmp/hadoop-unjar5272208588996002870/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore$AsyncClient$revoke_privileges_call.class (No space left on device)

        at java.io.FileOutputStream.open0(Native Method)

        at java.io.FileOutputStream.open(FileOutputStream.java:270)

        at java.io.FileOutputStream.<init>(FileOutputStream.java:213)

        at java.io.FileOutputStream.<init>(FileOutputStream.java:162)

        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:110)

        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:81)

        at org.apache.hadoop.util.RunJar.run(RunJar.java:214)

        at org.apache.hadoop.util.RunJar.main(RunJar.java:141)

====

 

We figured out that the MapReduce framework keeps temporary class files extracted from the job JAR under /tmp, in a directory named hadoop-unjarxxxxxx.

As there is not enough space in /tmp to hold the extracted class files, the job fails with the above error.
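For reference, a quick check along these lines on the node where the job is submitted shows the pressure on /tmp (the hadoop-unjar* pattern is taken from the error path above):

====

df -h /tmp                     # confirms /tmp is out of space ("No space left on device")

du -sh /tmp/hadoop-unjar*      # size of the extracted JAR contents

====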

 

We want to change the default location of the hadoop-unjarxxxxxx directory.

We changed the default value of hadoop.tmp.dir in core-site.xml (safety valve) to /ngs/app/${user.name}/hadoop/tmp, but it did not help.

 

Any suggestions on how to change the default hadoop-unjar location from /tmp to something else?

 

Regards,

Banshi.


Re: Mapreduce job failure due to hadoop-unjarxxxxxx directory under /tmp

Hi @banshidhar ,

 

Since the stack trace shows RunJar.java being used, the Java system property you need is:

 

java.io.tmpdir

 

If you can set that in your "Java Configuration" safety valves for YARN, that should help.

 

Since we don't see the whole stack trace in your post, we can't tell exactly which safety valve applies to your situation.
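For example, something along these lines on a gateway node should let you test it for a single client-side submission (the jar name, main class, and path are just placeholders):

====

export HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/path/with/enough/space"

hadoop jar your-job.jar your.main.Class <args>

====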


Re: Mapreduce job failure due to hadoop-unjarxxxxxx directory under /tmp

Hi @bgooley ,

 

Thank you for your suggestion.

We set the same property and it fixed the issue.

 

In Cloudera Manager --> YARN, we searched for Gateway Client Environment Advanced Configuration Snippet (Safety Valve) for hadoop-env.sh and added this:

 

HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/ngs/app/$(whoami)/hadoop/tmp"
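Note: the target directory has to exist and be writable by the submitting user on each gateway node. Something like the following can be used to create it and to confirm that new hadoop-unjar directories now land there after a job submission:

====

mkdir -p /ngs/app/$(whoami)/hadoop/tmp

ls -d /ngs/app/$(whoami)/hadoop/tmp/hadoop-unjar*

====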

 

Regards,

Banshi.
