<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Failed to submit the map reduce job using hive cli? in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failed-to-submit-the-map-reduce-job-using-hive-cli/m-p/117894#M51022</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I have installed the Hadoop services below, and all of them are running fine.&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/11209-services.png"&gt;services.png&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The problem is that when I execute the query below from the &lt;STRONG&gt;Hive CLI&lt;/STRONG&gt;, I get the following exception in the terminal.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Query: select count(*) from mytables;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Exception (short description):&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: hdfs://&amp;lt;nanodeHostIp&amp;gt;:8020/hdp/apps/2.2.9.0-3393/mapreduce/mapreduce.tar.gz)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Please find the attached exception file:&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/11210-exception.txt"&gt;exception.txt&lt;/A&gt;&lt;/P&gt;&lt;P&gt;I have also checked the &lt;STRONG&gt;"/hdp"&lt;/STRONG&gt; path on HDFS, and it is not completely present on the file system. My question is: since I did a fresh Hadoop installation through Apache Ambari, why did Ambari not create the files and directories under &lt;STRONG&gt;"/hdp"&lt;/STRONG&gt; automatically?&lt;/P&gt;&lt;P&gt;What is the solution to this problem?&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
    <pubDate>Sat, 07 Jan 2017 19:00:13 GMT</pubDate>
    <dc:creator>Manus</dc:creator>
    <dc:date>2017-01-07T19:00:13Z</dc:date>
    <item>
      <title>Failed to submit the map reduce job using hive cli?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failed-to-submit-the-map-reduce-job-using-hive-cli/m-p/117894#M51022</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I have installed the Hadoop services below, and all of them are running fine.&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/11209-services.png"&gt;services.png&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The problem is that when I execute the query below from the &lt;STRONG&gt;Hive CLI&lt;/STRONG&gt;, I get the following exception in the terminal.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Query: select count(*) from mytables;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Exception (short description):&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: hdfs://&amp;lt;nanodeHostIp&amp;gt;:8020/hdp/apps/2.2.9.0-3393/mapreduce/mapreduce.tar.gz)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Please find the attached exception file:&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/11210-exception.txt"&gt;exception.txt&lt;/A&gt;&lt;/P&gt;&lt;P&gt;I have also checked the &lt;STRONG&gt;"/hdp"&lt;/STRONG&gt; path on HDFS, and it is not completely present on the file system. My question is: since I did a fresh Hadoop installation through Apache Ambari, why did Ambari not create the files and directories under &lt;STRONG&gt;"/hdp"&lt;/STRONG&gt; automatically?&lt;/P&gt;&lt;P&gt;What is the solution to this problem?&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Sat, 07 Jan 2017 19:00:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failed-to-submit-the-map-reduce-job-using-hive-cli/m-p/117894#M51022</guid>
      <dc:creator>Manus</dc:creator>
      <dc:date>2017-01-07T19:00:13Z</dc:date>
    </item>
    <item>
      <title>Re: Failed to submit the map reduce job using hive cli?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failed-to-submit-the-map-reduce-job-using-hive-cli/m-p/117895#M51023</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/10447/manoj-dhake.html" nodeid="10447"&gt;@Manoj Dhake&lt;/A&gt;&lt;P&gt;Try to copy mapreduce tar file to hdfs and try hive query again. &lt;/P&gt;&lt;PRE&gt;#su - hdfs
#hdfs dfs -mkdir -p /hdp/apps/2.2.9.0-3393/mapreduce/
#hdfs dfs -put /usr/hdp/&amp;lt;version&amp;gt;/hadoop/mapreduce.tar.gz /hdp/apps/2.2.9.0-3393/mapreduce/mapreduce.tar.gz
&lt;/PRE&gt;</description>
      <pubDate>Sat, 07 Jan 2017 22:06:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failed-to-submit-the-map-reduce-job-using-hive-cli/m-p/117895#M51023</guid>
      <dc:creator>rguruvannagari</dc:creator>
      <dc:date>2017-01-07T22:06:51Z</dc:date>
    </item>
    <item>
      <title>Re: Failed to submit the map reduce job using hive cli?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failed-to-submit-the-map-reduce-job-using-hive-cli/m-p/117896#M51024</link>
      <description>&lt;P&gt;Thank you, rguruvannagari.&lt;/P&gt;&lt;P&gt;This solution worked for me.&lt;/P&gt;</description>
      <pubDate>Sat, 07 Jan 2017 23:22:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failed-to-submit-the-map-reduce-job-using-hive-cli/m-p/117896#M51024</guid>
      <dc:creator>Manus</dc:creator>
      <dc:date>2017-01-07T23:22:32Z</dc:date>
    </item>
  </channel>
</rss>