Quickstart VM Problem inserting data into hdfs through Hive

New Contributor

 

I am using the Quickstart VM's Query Editor in Hue to execute a query that fails with an 'Unable to move ...' exception when the results are loaded into the destination table (see the log below). The original query is:

 

insert overwrite table default.my_sample
select * from default.sample_07
where salary > 50000

 

(A similar error is also generated when using INSERT INTO.)

 

The destination table exists (and is empty). I'm assuming it's a permissions problem.
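One way to check that assumption from a terminal on the VM (a rough sketch, using the warehouse path that appears in the error log below) is to list the warehouse directory and compare the owner of the destination with the user the query runs as:

# Show owner, group, and permissions of the directories under the Hive warehouse
hadoop fs -ls /user/hive/warehouse
# The log below mentions a /tmp/hive-beeswax-cloudera scratch directory, so a mismatch
# between the directory owner and the user executing the query would explain the MoveTask failure.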

 

Many thanks in advance for any suggestions for a fix.

 

Adrian

 

Driver returned: 1.  Errors: OK
Hive history file=/tmp/hue/hive_job_log_a27f4957-79c6-4e7a-9a62-70be8bc9160f_794677910.txt
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201311200529_0007, Tracking URL = http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201311200529_0007
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_201311200529_0007
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2013-11-21 09:36:37,245 Stage-1 map = 0%,  reduce = 0%
2013-11-21 09:36:52,315 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
2013-11-21 09:36:53,328 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
2013-11-21 09:36:54,337 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
2013-11-21 09:36:55,346 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
2013-11-21 09:36:56,359 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
2013-11-21 09:36:57,368 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
2013-11-21 09:36:58,381 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
2013-11-21 09:36:59,420 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
2013-11-21 09:37:00,429 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
2013-11-21 09:37:01,446 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.09 sec
2013-11-21 09:37:02,465 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.09 sec
2013-11-21 09:37:03,483 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.09 sec
2013-11-21 09:37:04,505 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.09 sec
2013-11-21 09:37:05,521 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.09 sec
2013-11-21 09:37:06,532 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.09 sec
MapReduce Total cumulative CPU time: 5 seconds 90 msec
Ended Job = job_201311200529_0007
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201311200529_0008, Tracking URL = http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201311200529_0008
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_201311200529_0008
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-11-21 09:37:13,467 Stage-2 map = 0%,  reduce = 0%
2013-11-21 09:37:19,508 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2013-11-21 09:37:20,519 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2013-11-21 09:37:21,527 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2013-11-21 09:37:22,537 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2013-11-21 09:37:23,548 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2013-11-21 09:37:24,557 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2013-11-21 09:37:25,565 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2013-11-21 09:37:26,574 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2013-11-21 09:37:27,584 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2013-11-21 09:37:28,598 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.33 sec
2013-11-21 09:37:29,612 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.33 sec
2013-11-21 09:37:30,621 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.33 sec
2013-11-21 09:37:31,631 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.33 sec
2013-11-21 09:37:32,642 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.33 sec
2013-11-21 09:37:33,655 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.33 sec
MapReduce Total cumulative CPU time: 2 seconds 330 msec
Ended Job = job_201311200529_0008
Loading data to table default.sample_results
Failed with exception Unable to move sourcehdfs://localhost.localdomain:8020/tmp/hive-beeswax-cloudera/hive_2013-11-21_09-36-30_244_9169755317961322966-1/-ext-10000 to destination /user/hive/warehouse/sample_results
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
MapReduce Jobs Launched:
Job 0: Map: 2  Reduce: 1   Cumulative CPU: 5.09 sec   HDFS Read: 92399 HDFS Write: 55809 SUCCESS
Job 1: Map: 1  Reduce: 1   Cumulative CPU: 2.33 sec   HDFS Read: 56197 HDFS Write: 45887 SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 420 msec

1 ACCEPTED SOLUTION

Super Guru

Yes, /user/hive/warehouse/sample_results belongs to another user ('hive', or whichever user created the table, depending on your Hive configuration).

 

You can view or change its permissions with the File Browser, and you can also get to the HDFS directory from the Metastore app.
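If you prefer a terminal over the File Browser, a minimal command-line sketch (assuming the default Quickstart VM accounts and the warehouse path from the error message) is to check the owner and then hand the directory over to the user that runs the query:

# Check who currently owns the destination directory
hadoop fs -ls /user/hive/warehouse | grep sample_results
# Re-own it so the querying user can write to it (run as the HDFS superuser;
# substitute the user your Hive queries actually run as)
sudo -u hdfs hadoop fs -chown -R hive:hive /user/hive/warehouse/sample_results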

 

Some more details about Hive query permission errors: http://gethue.tumblr.com/post/64916325309/hadoop-tutorial-hive-query-editor-with-hiveserver2-and


2 REPLIES


New Contributor

Thanks very much for the help. The destination table was created with SQL via the Impala query engine's ODBC interface, which resulted in the table being owned by user 'impala'.
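In case it helps anyone else hitting the same thing: one possible fix under that scenario (sketched assuming the standard Quickstart VM accounts) is to loosen the permissions on the Impala-created directory so another user can write into it, or to re-own it as shown in the accepted solution:

# Loosen permissions on the Impala-owned table directory so the Hive query can move results into it
# (run as the HDFS superuser; a chown to the querying user is the tidier alternative,
# and a narrower mode may be preferable depending on your security needs)
sudo -u hdfs hadoop fs -chmod -R 777 /user/hive/warehouse/sample_results

After either change, re-running the INSERT OVERWRITE from the Hive editor should succeed, assuming no other restrictions apply.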