<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Mapreduce job failure due to hadoop-unjarxxxxxx directory under /tmp in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/282052#M209724</link>
    <description>&lt;P&gt;Hi &lt;SPAN&gt;&lt;A href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/35469" target="_blank" rel="noopener"&gt;Abhilash&lt;/A&gt;,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I am facing the same issue while running concurrent sessions.&lt;BR /&gt;Did you find a solution for changing the temp directory?&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Tue, 05 Nov 2019 06:42:11 GMT</pubDate>
    <dc:creator>sow</dc:creator>
    <dc:date>2019-11-05T06:42:11Z</dc:date>
    <item>
      <title>Mapreduce job failure due to hadoop-unjarxxxxxx directory under /tmp</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/89762#M32930</link>
      <description>&lt;P&gt;Hi Team,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are facing an issue where MapReduce jobs fail with the error below.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Distribution:&lt;/U&gt;&lt;/STRONG&gt; Cloudera (CDH 5.14.4)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;====&lt;/P&gt;&lt;P&gt;Exception in thread "main" java.io.FileNotFoundException: /tmp/hadoop-unjar5272208588996002870/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore$AsyncClient$revoke_privileges_call.class (&lt;STRONG&gt;&lt;U&gt;No space left on device&lt;/U&gt;&lt;/STRONG&gt;)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.io.FileOutputStream.open0(Native Method)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.io.FileOutputStream.open(FileOutputStream.java:270)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.io.FileOutputStream.&amp;lt;init&amp;gt;(FileOutputStream.java:213)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.io.FileOutputStream.&amp;lt;init&amp;gt;(FileOutputStream.java:162)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.util.RunJar.unJar(RunJar.java:110)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.util.RunJar.unJar(RunJar.java:81)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.util.RunJar.run(RunJar.java:214)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.util.RunJar.main(RunJar.java:141)&lt;/P&gt;&lt;P&gt;====&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We figured out that the MR framework unpacks temporary class files under /tmp, in a folder named &lt;STRONG&gt;&lt;U&gt;hadoop-unjarxxxxxx&lt;/U&gt;&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;As there is not enough space in /tmp to hold the class files, the job fails with the above error.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We want to change the default location of the &lt;STRONG&gt;&lt;U&gt;hadoop-unjarxxxxxx&lt;/U&gt;&lt;/STRONG&gt; directory.&lt;/P&gt;&lt;P&gt;We changed the default value of &lt;STRONG&gt;&lt;U&gt;hadoop.tmp.dir&lt;/U&gt;&lt;/STRONG&gt; in core-site.xml (safety valve) to &lt;STRONG&gt;&lt;U&gt;/ngs/app/${user.name}/hadoop/tmp&lt;/U&gt;&lt;/STRONG&gt;, but it did not help.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any suggestions on how to change the default &lt;STRONG&gt;hadoop-unjar&lt;/STRONG&gt; location from &lt;STRONG&gt;/tmp&lt;/STRONG&gt; to something else?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Banshi.&lt;/P&gt;</description>
      <pubDate>Tue, 30 Apr 2019 12:02:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/89762#M32930</guid>
      <dc:creator>banshidhar_saho</dc:creator>
      <dc:date>2019-04-30T12:02:42Z</dc:date>
    </item>
    <item>
      <title>Re: Mapreduce job failure due to hadoop-unjarxxxxxx directory under /tmp</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/89777#M32931</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/28293"&gt;@banshidhar_saho&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Since the stack trace shows RunJar.java being used, the Java option you need is:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;java.io.tmpdir&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you can set that in your "Java Configuration" safety valves for YARN, that should help.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Since we don't see the whole stack trace in your post, we can't tell exactly which safety valve applies to your situation.&lt;/P&gt;</description>
      <pubDate>Tue, 30 Apr 2019 17:18:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/89777#M32931</guid>
      <dc:creator>bgooley</dc:creator>
      <dc:date>2019-04-30T17:18:51Z</dc:date>
    </item>
    <item>
      <title>Re: Mapreduce job failure due to hadoop-unjarxxxxxx directory under /tmp</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/89833#M32932</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/4054"&gt;@bgooley&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you for your suggestion.&lt;/P&gt;&lt;P&gt;We set that property and it fixed the issue.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;In Cloudera Manager --&amp;gt; YARN, we searched for "Gateway Client Environment Advanced Configuration Snippet (Safety Valve) for hadoop-env.sh" and added this:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/ngs/app/$(whoami)/hadoop/tmp"&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Banshi.&lt;/P&gt;</description>
      <pubDate>Thu, 02 May 2019 12:18:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/89833#M32932</guid>
      <dc:creator>banshidhar_saho</dc:creator>
      <dc:date>2019-05-02T12:18:02Z</dc:date>
    </item>
    <item>
      <title>Re: Mapreduce job failure due to hadoop-unjarxxxxxx directory under /tmp</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/93557#M57241</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;We are facing the same issue in our production environment. Is this creation of temp files the default behaviour, and can it be changed? As we run multiple concurrent sessions at the same time, we are seeing multiple job failures due to this inodes issue. Is there any solution other than changing the path?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I just want to know in which cases the folder is created. We are using the Hive CLI, Beeline, Hadoop shell commands, and Sqoop processes in our project.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We have observed that, by default, the folders are deleted automatically as soon as the activity completes, but sometimes some folders are not deleted and end up occupying many inodes (12k) for the folder structure.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help us understand.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Wed, 31 Jul 2019 04:21:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/93557#M57241</guid>
      <dc:creator>Abhilash</dc:creator>
      <dc:date>2019-07-31T04:21:55Z</dc:date>
    </item>
    <item>
      <title>Re: Mapreduce job failure due to hadoop-unjarxxxxxx directory under /tmp</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/282052#M209724</link>
      <description>&lt;P&gt;Hi &lt;SPAN&gt;&lt;A href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/35469" target="_blank" rel="noopener"&gt;Abhilash&lt;/A&gt;,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I am facing the same issue while running concurrent sessions.&lt;BR /&gt;Did you find a solution for changing the temp directory?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 05 Nov 2019 06:42:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Mapreduce-job-failure-due-to-hadoop-unjarxxxxxx-directory/m-p/282052#M209724</guid>
      <dc:creator>sow</dc:creator>
      <dc:date>2019-11-05T06:42:11Z</dc:date>
    </item>
  </channel>
</rss>