<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Quickstart VM Problem inserting data into hdfs through Hive in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Quickstart-VM-Problem-inserting-data-into-hdfs-through-Hive/m-p/3475#M595</link>
    <description>&lt;P&gt;Yes, /user/hive/warehouse/sample_results belongs to another user ('hive' or the one that created the table, depending on your Hive Configuration).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can see it or change the permissions with the File Browser and also get to the HDFS directory from the Metastore App.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Some more details about Hive queries permissions errors: &lt;A target="_blank" href="http://gethue.tumblr.com/post/64916325309/hadoop-tutorial-hive-query-editor-with-hiveserver2-and"&gt;http://gethue.tumblr.com/post/64916325309/hadoop-tutorial-hive-query-editor-with-hiveserver2-and&lt;/A&gt;&lt;/P&gt;</description>
    <pubDate>Fri, 22 Nov 2013 19:25:52 GMT</pubDate>
    <dc:creator>Romainr</dc:creator>
    <dc:date>2013-11-22T19:25:52Z</dc:date>
    <item>
      <title>Quickstart VM Problem inserting data into hdfs through Hive</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Quickstart-VM-Problem-inserting-data-into-hdfs-through-Hive/m-p/3445#M593</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am using the Quickstart VM's Query Editor in Hue to execute a query which results in an 'Unable to move ...' exception when the results are being loaded into the destination file (see below). The original query is:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;insert overwrite table default.my_sample&lt;BR /&gt;select * from default.sample_07&lt;BR /&gt;where salary &amp;gt; 50000&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(A similar error is also generated when using INSERT INTO)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The destination table exists (and is empty).
I'm assuming it's a permissions problem.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Many thanks in advance for any suggestions for a fix.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Adrian&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Driver returned: 1.&amp;nbsp; Errors: OK&lt;BR /&gt;Hive history file=/tmp/hue/hive_job_log_a27f4957-79c6-4e7a-9a62-70be8bc9160f_794677910.txt&lt;BR /&gt;Total MapReduce jobs = 2&lt;BR /&gt;Launching Job 1 out of 2&lt;BR /&gt;Number of reduce tasks determined at compile time: 1&lt;BR /&gt;In order to change the average load for a reducer (in bytes):&lt;BR /&gt;&amp;nbsp; set hive.exec.reducers.bytes.per.reducer=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to limit the maximum number of reducers:&lt;BR /&gt;&amp;nbsp; set hive.exec.reducers.max=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to set a constant number of reducers:&lt;BR /&gt;&amp;nbsp; set mapred.reduce.tasks=&amp;lt;number&amp;gt;&lt;BR /&gt;Starting Job = job_201311200529_0007, Tracking URL = &lt;A target="_blank" href="http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201311200529_0007"&gt;http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201311200529_0007&lt;/A&gt;&lt;BR /&gt;Kill Command = /usr/lib/hadoop/bin/hadoop job&amp;nbsp; -kill job_201311200529_0007&lt;BR /&gt;Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1&lt;BR /&gt;2013-11-21 09:36:37,245 Stage-1 map = 0%,&amp;nbsp; reduce = 0%&lt;BR /&gt;2013-11-21 09:36:52,315 Stage-1 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 2.95 sec&lt;BR /&gt;2013-11-21 09:36:53,328 Stage-1 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 2.95 sec&lt;BR /&gt;2013-11-21 09:36:54,337 Stage-1 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 2.95 sec&lt;BR /&gt;2013-11-21 09:36:55,346 Stage-1 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 2.95 sec&lt;BR /&gt;2013-11-21 09:36:56,359 Stage-1 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 2.95 sec&lt;BR /&gt;2013-11-21 09:36:57,368 Stage-1 map = 
100%,&amp;nbsp; reduce = 0%, Cumulative CPU 2.95 sec&lt;BR /&gt;2013-11-21 09:36:58,381 Stage-1 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 2.95 sec&lt;BR /&gt;2013-11-21 09:36:59,420 Stage-1 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 2.95 sec&lt;BR /&gt;2013-11-21 09:37:00,429 Stage-1 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 2.95 sec&lt;BR /&gt;2013-11-21 09:37:01,446 Stage-1 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 5.09 sec&lt;BR /&gt;2013-11-21 09:37:02,465 Stage-1 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 5.09 sec&lt;BR /&gt;2013-11-21 09:37:03,483 Stage-1 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 5.09 sec&lt;BR /&gt;2013-11-21 09:37:04,505 Stage-1 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 5.09 sec&lt;BR /&gt;2013-11-21 09:37:05,521 Stage-1 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 5.09 sec&lt;BR /&gt;2013-11-21 09:37:06,532 Stage-1 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 5.09 sec&lt;BR /&gt;MapReduce Total cumulative CPU time: 5 seconds 90 msec&lt;BR /&gt;Ended Job = job_201311200529_0007&lt;BR /&gt;Launching Job 2 out of 2&lt;BR /&gt;Number of reduce tasks determined at compile time: 1&lt;BR /&gt;In order to change the average load for a reducer (in bytes):&lt;BR /&gt;&amp;nbsp; set hive.exec.reducers.bytes.per.reducer=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to limit the maximum number of reducers:&lt;BR /&gt;&amp;nbsp; set hive.exec.reducers.max=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to set a constant number of reducers:&lt;BR /&gt;&amp;nbsp; set mapred.reduce.tasks=&amp;lt;number&amp;gt;&lt;BR /&gt;Starting Job = job_201311200529_0008, Tracking URL = &lt;A target="_blank" href="http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201311200529_0008"&gt;http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201311200529_0008&lt;/A&gt;&lt;BR /&gt;Kill Command = /usr/lib/hadoop/bin/hadoop job&amp;nbsp; -kill job_201311200529_0008&lt;BR /&gt;Hadoop job information for Stage-2: number of 
mappers: 1; number of reducers: 1&lt;BR /&gt;2013-11-21 09:37:13,467 Stage-2 map = 0%,&amp;nbsp; reduce = 0%&lt;BR /&gt;2013-11-21 09:37:19,508 Stage-2 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 0.83 sec&lt;BR /&gt;2013-11-21 09:37:20,519 Stage-2 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 0.83 sec&lt;BR /&gt;2013-11-21 09:37:21,527 Stage-2 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 0.83 sec&lt;BR /&gt;2013-11-21 09:37:22,537 Stage-2 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 0.83 sec&lt;BR /&gt;2013-11-21 09:37:23,548 Stage-2 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 0.83 sec&lt;BR /&gt;2013-11-21 09:37:24,557 Stage-2 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 0.83 sec&lt;BR /&gt;2013-11-21 09:37:25,565 Stage-2 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 0.83 sec&lt;BR /&gt;2013-11-21 09:37:26,574 Stage-2 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 0.83 sec&lt;BR /&gt;2013-11-21 09:37:27,584 Stage-2 map = 100%,&amp;nbsp; reduce = 0%, Cumulative CPU 0.83 sec&lt;BR /&gt;2013-11-21 09:37:28,598 Stage-2 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 2.33 sec&lt;BR /&gt;2013-11-21 09:37:29,612 Stage-2 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 2.33 sec&lt;BR /&gt;2013-11-21 09:37:30,621 Stage-2 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 2.33 sec&lt;BR /&gt;2013-11-21 09:37:31,631 Stage-2 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 2.33 sec&lt;BR /&gt;2013-11-21 09:37:32,642 Stage-2 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 2.33 sec&lt;BR /&gt;2013-11-21 09:37:33,655 Stage-2 map = 100%,&amp;nbsp; reduce = 100%, Cumulative CPU 2.33 sec&lt;BR /&gt;MapReduce Total cumulative CPU time: 2 seconds 330 msec&lt;BR /&gt;Ended Job = job_201311200529_0008&lt;BR /&gt;Loading data to table default.sample_results&lt;BR /&gt;Failed with exception Unable to move sourcehdfs://localhost.localdomain:8020/tmp/hive-beeswax-cloudera/hive_2013-11-21_09-36-30_244_9169755317961322966-1/-ext-10000 to 
destination /user/hive/warehouse/sample_results&lt;BR /&gt;FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask&lt;BR /&gt;MapReduce Jobs Launched:&lt;BR /&gt;Job 0: Map: 2&amp;nbsp; Reduce: 1&amp;nbsp;&amp;nbsp; Cumulative CPU: 5.09 sec&amp;nbsp;&amp;nbsp; HDFS Read: 92399 HDFS Write: 55809 SUCCESS&lt;BR /&gt;Job 1: Map: 1&amp;nbsp; Reduce: 1&amp;nbsp;&amp;nbsp; Cumulative CPU: 2.33 sec&amp;nbsp;&amp;nbsp; HDFS Read: 56197 HDFS Write: 45887 SUCCESS&lt;BR /&gt;Total MapReduce CPU Time Spent: 7 seconds 420 msec&lt;/P&gt;</description>
      <pubDate>Thu, 21 Nov 2013 14:02:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Quickstart-VM-Problem-inserting-data-into-hdfs-through-Hive/m-p/3445#M593</guid>
      <dc:creator>AdrianW</dc:creator>
      <dc:date>2013-11-21T14:02:23Z</dc:date>
    </item>
    <item>
      <title>Re: Quickstart VM Problem inserting data into hdfs through Hive</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Quickstart-VM-Problem-inserting-data-into-hdfs-through-Hive/m-p/3475#M595</link>
      <description>&lt;P&gt;Yes, /user/hive/warehouse/sample_results belongs to another user ('hive' or the one that created the table, depending on your Hive Configuration).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can see it or change the permissions with the File Browser and also get to the HDFS directory from the Metastore App.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Some more details about Hive queries permissions errors: &lt;A target="_blank" href="http://gethue.tumblr.com/post/64916325309/hadoop-tutorial-hive-query-editor-with-hiveserver2-and"&gt;http://gethue.tumblr.com/post/64916325309/hadoop-tutorial-hive-query-editor-with-hiveserver2-and&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 22 Nov 2013 19:25:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Quickstart-VM-Problem-inserting-data-into-hdfs-through-Hive/m-p/3475#M595</guid>
      <dc:creator>Romainr</dc:creator>
      <dc:date>2013-11-22T19:25:52Z</dc:date>
    </item>
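The fix the reply describes can be sketched as HDFS shell commands run inside the Quickstart VM. This is only a sketch based on the thread: the 'cloudera' user name is the Quickstart VM default and an assumption here, and the warehouse path is the one from the error message; adjust both to your setup.

```shell
# Inspect ownership of the destination directory named in the MoveTask error
# (run as the hdfs superuser so the listing always succeeds)
sudo -u hdfs hadoop fs -ls /user/hive/warehouse

# Option 1: hand the directory to the user running the Hive query
# ('cloudera' is assumed; use whichever user submits the query in Hue)
sudo -u hdfs hadoop fs -chown -R cloudera:cloudera /user/hive/warehouse/sample_results

# Option 2: keep the owner but widen the permissions instead
sudo -u hdfs hadoop fs -chmod -R 775 /user/hive/warehouse/sample_results
```

After either change, re-running the INSERT OVERWRITE should let the MoveTask copy its output into the destination directory.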
    <item>
      <title>Re: Quickstart VM Problem inserting data into hdfs through Hive</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Quickstart-VM-Problem-inserting-data-into-hdfs-through-Hive/m-p/3631#M596</link>
      <description>&lt;P&gt;Thanks very much for the help. The destination table was created using SQL via the Impala query engine's ODBC interface. This resulted in the table being owned by user 'impala'.&lt;/P&gt;</description>
      <pubDate>Mon, 02 Dec 2013 09:56:46 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Quickstart-VM-Problem-inserting-data-into-hdfs-through-Hive/m-p/3631#M596</guid>
      <dc:creator>AdrianW</dc:creator>
      <dc:date>2013-12-02T09:56:46Z</dc:date>
    </item>
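The root cause above (a table created through Impala ending up owned by 'impala') can be confirmed from the command line before touching any permissions. A minimal sketch, assuming the 'sample_results' table from the thread and a working hive CLI inside the VM:

```shell
# Show the table's metadata, including its Owner: and HDFS Location: fields
hive -e "DESCRIBE FORMATTED default.sample_results;"

# Cross-check the owner of the table's directory directly in HDFS
hadoop fs -ls /user/hive/warehouse | grep sample_results
```

If the owner shown is a different user (such as 'impala') than the one running the Hive query, the MoveTask failure in the original post is expected, and a chown or chmod on that directory resolves it.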
  </channel>
</rss>