Unable to run hive job

Hi,

I recently installed Hadoop 2.6.0 with Hive 1.1.0 on CDH 5.

When I run a Hive query, the MapReduce job gets submitted but never runs any tasks (0 mappers, 0 reducers) and eventually fails.

Any suggestions?
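
For what it's worth, the execution engine is plain MapReduce (hive.execution.engine=mr, which also shows up in the log below). A quick way to confirm that from the Hive CLI, nothing cluster-specific assumed here:

hive> set hive.execution.engine;
hive.execution.engine=mr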


Here is the Hive CLI output:


hive> select max(id1) from foo where id1>id2;
Query ID = origin_20150925120606_e7226346-4547-45ad-a99e-8ca1b629bd60
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1443147339086_0006, Tracking URL = http://cluster1.xxx.local:8088/proxy/application_1443147339086_0006/
Kill Command = /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/bin/hadoop job -kill job_1443147339086_0006
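
In case it helps with diagnosis, the application can also be inspected with the standard YARN CLI, using the application ID from the tracking URL above (the second command assumes log aggregation is enabled on the cluster):

# Check the state and final status of the YARN application
yarn application -status application_1443147339086_0006

# Pull the aggregated container logs once the application has finished
yarn logs -applicationId application_1443147339086_0006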

And here is the Hive log:

[root@cluster1 hive]# cat /tmp/origin/hive.log
2015-09-25 12:05:48,373 INFO [main]: session.SessionState (SessionState.java:createPath(625)) - Created local directory: /tmp/68383dac-0397-491c-a1e3-9c844c3afdcf_resources
2015-09-25 12:05:48,390 INFO [main]: session.SessionState (SessionState.java:createPath(625)) - Created HDFS directory: /tmp/hive/origin/68383dac-0397-491c-a1e3-9c844c3afdcf
2015-09-25 12:05:48,392 INFO [main]: session.SessionState (SessionState.java:createPath(625)) - Created local directory: /tmp/origin/68383dac-0397-491c-a1e3-9c844c3afdcf
2015-09-25 12:05:48,397 INFO [main]: session.SessionState (SessionState.java:createPath(625)) - Created HDFS directory: /tmp/hive/origin/68383dac-0397-491c-a1e3-9c844c3afdcf/_tmp_space.db
2015-09-25 12:05:48,398 INFO [main]: session.SessionState (SessionState.java:start(527)) - No Tez session required at this point. hive.execution.engine=mr.
2015-09-25 12:06:06,284 ERROR [main]: hdfs.KeyProviderCache (KeyProviderCache.java:createKeyProviderURI(87)) - Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
2015-09-25 12:06:06,662 INFO [main]: ql.Driver (Driver.java:compile(434)) - Semantic Analysis Completed
2015-09-25 12:06:06,686 INFO [main]: ql.Driver (Driver.java:getSchema(238)) - Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:int, comment:null)], properties:null)
2015-09-25 12:06:07,797 INFO [main]: ql.Driver (Driver.java:execute(1318)) - Starting command(queryId=origin_20150925120606_e7226346-4547-45ad-a99e-8ca1b629bd60): select max(id1) from foo where id1>id2
2015-09-25 12:06:07,798 INFO [main]: ql.Driver (SessionState.java:printInfo(912)) - Query ID = origin_20150925120606_e7226346-4547-45ad-a99e-8ca1b629bd60
2015-09-25 12:06:07,798 INFO [main]: ql.Driver (SessionState.java:printInfo(912)) - Total jobs = 1
2015-09-25 12:06:07,811 INFO [main]: ql.Driver (SessionState.java:printInfo(912)) - Launching Job 1 out of 1
2015-09-25 12:06:07,814 INFO [main]: ql.Driver (Driver.java:launchTask(1636)) - Starting task [Stage-1:MAPRED] in serial mode
2015-09-25 12:06:07,814 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - Number of reduce tasks determined at compile time: 1
2015-09-25 12:06:07,814 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - In order to change the average load for a reducer (in bytes):
2015-09-25 12:06:07,814 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - set hive.exec.reducers.bytes.per.reducer=<number>
2015-09-25 12:06:07,816 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - In order to limit the maximum number of reducers:
2015-09-25 12:06:07,816 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - set hive.exec.reducers.max=<number>
2015-09-25 12:06:07,816 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - In order to set a constant number of reducers:
2015-09-25 12:06:07,816 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - set mapreduce.job.reduces=<number>
2015-09-25 12:06:07,833 INFO [main]: mr.ExecDriver (ExecDriver.java:execute(286)) - Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
2015-09-25 12:06:07,836 INFO [main]: mr.ExecDriver (ExecDriver.java:execute(308)) - adding libjars: file:///opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hive/lib/hive-hbase-handler-1.1.0-cdh5.4.7.jar,file:///opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase/hbase-hadoop-compat.jar,file:///opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase/lib/htrace-core.jar,file:///opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase/lib/htrace-core-3.0.4.jar,file:///opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase/lib/htrace-core-3.1.0-incubating.jar,file:///opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase/hbase-client.jar,file:///opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase/hbase-common.jar,file:///opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase/hbase-hadoop2-compat.jar,file:///opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase/hbase-protocol.jar,file:///opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase/hbase-server.jar,file:///usr/share/cmf/lib/postgresql-9.0-801.jdbc4.jar
2015-09-25 12:06:08,171 ERROR [main]: mr.ExecDriver (ExecDriver.java:execute(398)) - yarn
2015-09-25 12:06:08,518 WARN [main]: mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(153)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2015-09-25 12:06:10,079 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - Starting Job = job_1443147339086_0006, Tracking URL = http://cluster1.xxx.local:8088/proxy/application_1443147339086_0006/
2015-09-25 12:06:10,080 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - Kill Command = /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/bin/hadoop job -kill job_1443147339086_0006
2015-09-25 13:00:32,780 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2015-09-25 13:00:32,821 WARN [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2015-09-25 13:00:32,822 INFO [main]: exec.Task (SessionState.java:printInfo(912)) - 2015-09-25 13:00:32,819 Stage-1 map = 0%, reduce = 0%
2015-09-25 13:00:32,824 WARN [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2015-09-25 13:00:32,831 ERROR [main]: exec.Task (SessionState.java:printError(921)) - Ended Job = job_1443147339086_0006 with errors
2015-09-25 13:00:32,832 ERROR [Thread-47]: exec.Task (SessionState.java:printError(921)) - Error during job, obtaining debugging information...
2015-09-25 13:00:32,888 ERROR [main]: ql.Driver (SessionState.java:printError(921)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2015-09-25 13:00:32,889 INFO [main]: ql.Driver (SessionState.java:printInfo(912)) - MapReduce Jobs Launched:
2015-09-25 13:00:32,893 WARN [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2015-09-25 13:00:32,896 INFO [main]: ql.Driver (SessionState.java:printInfo(912)) - Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
2015-09-25 13:00:32,896 INFO [main]: ql.Driver (SessionState.java:printInfo(912)) - Total MapReduce CPU Time Spent: 0 msec
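
For completeness: the "set ..." lines Hive prints above are only its standard hints for tuning the number of reducers. They would be applied in the session like this (the values below are arbitrary examples, not taken from my cluster), and as far as I can tell they do not explain why Stage-1 failed with 0 mappers and 0 reducers:

hive> set hive.exec.reducers.bytes.per.reducer=268435456;
hive> set hive.exec.reducers.max=10;
hive> set mapreduce.job.reduces=1;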
