Support Questions


HIVE: return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Expert Contributor

Hi everyone,

 

I have set up Kerberos + the Sentry service (not a policy file). Currently everything works fine except Hive.

 

"select * from table" works: any statement without a condition finishes fine. But "select count(*) from table" or "select * from table where xxx = xxx" fails with the error in the title. That's strange. Has anybody seen this? Thanks in advance.
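One way to see the difference between the two kinds of queries (a hedged sketch, run against the poster's `test` table): a bare `select *` is typically served by a fetch task without launching any MapReduce job, while an aggregate or a `where` filter compiles to a Map/Reduce stage — which is exactly where the return code 2 surfaces. `EXPLAIN` shows which path each query takes:

```shell
# EXPLAIN prints the query plan without executing it.
# A plan containing only a Fetch Operator launches no job (so it succeeds);
# a plan with a Map Reduce stage must submit a job to YARN, which is the
# step that fails here.
hive -e "EXPLAIN SELECT * FROM test;"
hive -e "EXPLAIN SELECT count(*) FROM test;"
```

This narrows the problem to job submission/execution on the cluster rather than to Hive's metadata or SQL layer.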

 

More details below:

 

14/10/06 05:28:31 INFO mapreduce.Job: The url to track the job: http://namenode01.hadoop:8088/proxy/application_1412544483910_0001/
14/10/06 05:28:31 INFO exec.Task: Starting Job = job_1412544483910_0001, Tracking URL = http://namenode01.hadoop:8088/proxy/application_1412544483910_0001/
14/10/06 05:28:31 INFO exec.Task: Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job  -kill job_1412544483910_0001
14/10/06 05:28:31 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:32 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:33 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:34 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:35 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:36 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:37 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:38 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:40 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:41 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:43 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:45 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:46 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:48 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:50 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:53 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:55 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:55 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
14/10/06 05:28:55 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/10/06 05:28:55 INFO exec.Task: 2014-10-06 05:28:55,502 Stage-1 map = 0%,  reduce = 0%
14/10/06 05:28:55 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/10/06 05:28:55 ERROR exec.Task: Ended Job = job_1412544483910_0001 with errors
14/10/06 05:28:55 INFO impl.YarnClientImpl: Killed application application_1412544483910_0001
14/10/06 05:28:55 INFO log.PerfLogger: </PERFLOG method=task.MAPRED.Stage-1 start=1412544509576 end=1412544535559 duration=25983 from=org.apache.hadoop.hive.ql.Driver>
14/10/06 05:28:55 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/10/06 05:28:55 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1412544509575 end=1412544535560 duration=25985 from=org.apache.hadoop.hive.ql.Driver>
14/10/06 05:28:55 INFO ql.Driver: MapReduce Jobs Launched: 
14/10/06 05:28:55 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
14/10/06 05:28:55 INFO ql.Driver: Job 0:  HDFS Read: 0 HDFS Write: 0 FAIL
14/10/06 05:28:55 INFO ql.Driver: Total MapReduce CPU Time Spent: 0 msec
14/10/06 05:28:55 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
14/10/06 05:28:55 INFO ZooKeeperHiveLockManager:  about to release lock for default/**bleep**you
14/10/06 05:28:55 INFO ZooKeeperHiveLockManager:  about to release lock for default
14/10/06 05:28:55 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1412544535562 end=1412544535579 duration=17 from=org.apache.hadoop.hive.ql.Driver>
14/10/06 05:28:55 ERROR operation.Operation: Error: 
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
	at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:146)
	at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:64)
	at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:177)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
14/10/06 05:28:57 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()

 

 

 

1 ACCEPTED SOLUTION

Expert Contributor

I managed to resolve this issue.

 

At first I enabled debug mode to check the details, but found nothing.

 

Then I opened namenode:8088 to check the history files and container details, but the container logs couldn't be opened, meaning the containers no longer existed.
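When the web UI can no longer show a container, the logs can often still be pulled from the command line — assuming YARN log aggregation is enabled, which is not guaranteed on every cluster:

```shell
# Fetch aggregated container logs for a finished/killed application
# (application id taken from the failed run logged above):
yarn logs -applicationId application_1412544483910_0001
```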

 

Finally, I went to the HDFS /user directory with the Hue file browser and opened some logs, which showed a permission-denied error on /tmp/history. I deleted /tmp/history and tried again, and it works now.
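A sketch of that check and fix from the HDFS CLI (the /tmp/history path is as described above; the chmod alternative is my assumption, not something the poster tried):

```shell
# Inspect ownership and permissions of the history directory first:
hdfs dfs -ls /tmp
hdfs dfs -ls /tmp/history

# Either remove it so the job-history machinery recreates it with
# correct permissions (what worked for the poster)...
hdfs dfs -rm -r /tmp/history

# ...or, alternatively, loosen its permissions instead of deleting
# (run as the HDFS superuser; 1777 = world-writable with sticky bit):
hdfs dfs -chmod 1777 /tmp/history
```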

 

 


4 REPLIES 4

Expert Contributor

Everyone, below are some of my tests; I set HADOOP_YARN_HOME manually.

 

 

Test one: if HADOOP_YARN_HOME points at hadoop-0.20-mapreduce, it works.

 

[hdfs@namenode02 ~]$ export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/
[hdfs@namenode02 ~]$ hive
14/10/06 06:59:04 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
hive> select count(*) from test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_local1939864979_0001, Tracking URL = http://localhost:8080/
Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_local1939864979_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2014-10-06 06:59:14,364 Stage-1 map = 0%, reduce = 100%
Ended Job = job_local1939864979_0001
MapReduce Jobs Launched:
Job 0: HDFS Read: 0 HDFS Write: 12904 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
0
Time taken: 4.095 seconds, Fetched: 1 row(s)
hive> exit;

 

 

Test two: the default; I changed nothing and just tested while logged in to the OS as hdfs. It fails.

 

[hdfs@datanode03 ~]$ hive
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/10/06 07:03:27 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
hive> select count(*) from test
> ;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1412549128740_0004, Tracking URL = http://namenode01.hadoop:8088/proxy/application_1412549128740_0004/
Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_1412549128740_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2014-10-06 07:03:53,523 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1412549128740_0004 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

 

Test three: it fails.

 

[hdfs@namenode02 hadoop-yarn]$ export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop-yarn/

 

[hdfs@namenode02 hadoop-yarn]$ hive
14/10/06 06:44:38 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
hive> show tables;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
hive> show tables;
OK
database_params
**bleep**you
sequence_table
tbls
test
test1
Time taken: 0.338 seconds, Fetched: 6 row(s)
hive> select count(*) from test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1412549128740_0003, Tracking URL = http://namenode01.hadoop:8088/proxy/application_1412549128740_0003/
Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_1412549128740_0003
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2014-10-06 06:54:19,156 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1412549128740_0003 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

 

 

 

The conclusion: it only works when I set HADOOP_YARN_HOME to the *0.20-* path. So what can I do right now?
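Worth noting before treating the 0.20 path as a fix: in Test one the job id is job_local1939864979_0001, i.e. that run used the local job runner rather than submitting to YARN, so it bypassed (rather than cured) the cluster-side failure. A quick, hedged way to see which framework the client will actually use (config path assumed to be the standard CDH location):

```shell
# Show the configured execution framework ("yarn" vs "local" /
# "classic") and the YARN home currently in effect:
grep -A1 mapreduce.framework.name /etc/hadoop/conf/mapred-site.xml
echo "$HADOOP_YARN_HOME"
```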


New Contributor

I am seeing the same error and cannot figure out a solution. I am using Kerberos and a Sentry policy file, on CDH 5.3.

 

15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO parse.ParseDriver: Parsing command: select id from firm
15/03/19 15:42:46 INFO parse.ParseDriver: Parse Completed
15/03/19 15:42:46 INFO log.PerfLogger: </PERFLOG method=parse start=1426779766054 end=1426779766055 duration=1 from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO conf.HiveAuthzConf: DefaultFS: hdfs://prodhadoop01-node-01:8020
15/03/19 15:42:46 INFO conf.HiveAuthzConf: DefaultFS: hdfs://prodhadoop01-node-01:8020
15/03/19 15:42:46 WARN mortbay.log: Using the deprecated config setting hive.sentry.server instead of sentry.hive.server
15/03/19 15:42:46 WARN mortbay.log: Using the deprecated config setting hive.sentry.provider instead of sentry.provider
15/03/19 15:42:46 WARN mortbay.log: Using the deprecated config setting hive.sentry.provider.resource instead of sentry.hive.provider.resource
15/03/19 15:42:46 INFO file.SimpleFileProviderBackend: Parsing /user/hive/sentry/sentry-provider.ini
15/03/19 15:42:46 INFO file.SimpleFileProviderBackend: Filesystem: hdfs://prodhadoop01-node-01:8020
15/03/19 15:42:46 INFO file.PolicyFiles: Opening /user/hive/sentry/sentry-provider.ini
15/03/19 15:42:46 INFO file.SimpleFileProviderBackend: Section databases needs no further processing
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic Analysis
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Get metadata for source tables
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Get metadata for subqueries
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Get metadata for destination tables
15/03/19 15:42:46 INFO ql.Context: New scratch dir is hdfs://prodhadoop01-node-01:8020/tmp/hive-hive/hive_2015-03-19_15-42-46_054_9213739680141366916-1
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Completed getting MetaData in Semantic Analysis
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Set stats collection dir : hdfs://prodhadoop01-node-01:8020/tmp/hive-hive/hive_2015-03-19_15-42-46_054_9213739680141366916-1/-ext-10002
15/03/19 15:42:46 INFO ppd.OpProcFactory: Processing for FS(27)
15/03/19 15:42:46 INFO ppd.OpProcFactory: Processing for SEL(26)
15/03/19 15:42:46 INFO ppd.OpProcFactory: Processing for TS(25)
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
15/03/19 15:42:46 INFO log.PerfLogger: </PERFLOG method=partition-retrieving start=1426779766137 end=1426779766137 duration=0 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
15/03/19 15:42:46 INFO physical.MetadataOnlyOptimizer: Looking for table scans where optimization is applicable
15/03/19 15:42:46 INFO physical.MetadataOnlyOptimizer: Found 0 metadata only table scans
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Completed plan generation
15/03/19 15:42:46 INFO ql.Driver: Semantic Analysis Completed
15/03/19 15:42:46 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1426779766055 end=1426779766138 duration=83 from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO exec.ListSinkOperator: Initializing Self 28 OP
15/03/19 15:42:46 INFO exec.ListSinkOperator: Operator 28 OP initialized
15/03/19 15:42:46 INFO exec.ListSinkOperator: Initialization Done 28 OP
15/03/19 15:42:46 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:id, type:int, comment:null)], properties:null)
15/03/19 15:42:46 INFO log.PerfLogger: </PERFLOG method=compile start=1426779766054 end=1426779766139 duration=85 from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=acquireReadWriteLocks from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO lockmgr.DummyTxnManager: Creating lock manager of type org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
15/03/19 15:42:46 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=prodhadoop01-node-01:2181,prodhadoop01-node-04:2181,prodhadoop01-node-05:2181 sessionTimeout=600000 watcher=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager$DummyWatcher@1a65511f
15/03/19 15:42:46 INFO log.PerfLogger: </PERFLOG method=acquireReadWriteLocks start=1426779766141 end=1426779766200 duration=59 from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO ql.Driver: Starting command: select id from firm
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=PreHook.org.apache.sentry.binding.hive.HiveAuthzBindingPreExecHook from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO log.PerfLogger: </PERFLOG method=PreHook.org.apache.sentry.binding.hive.HiveAuthzBindingPreExecHook start=1426779766200 end=1426779766200 duration=0 from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO ql.Driver: Total jobs = 1
15/03/19 15:42:46 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1426779766141 end=1426779766201 duration=60 from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=task.MAPRED.Stage-1 from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:42:46 INFO ql.Driver: Launching Job 1 out of 1
15/03/19 15:42:46 INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
15/03/19 15:42:46 INFO ql.Context: New scratch dir is hdfs://prodhadoop01-node-01:8020/tmp/hive-hive/hive_2015-03-19_15-42-46_054_9213739680141366916-6
15/03/19 15:42:46 INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
15/03/19 15:42:46 INFO mr.ExecDriver: adding libjars: file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hive/lib/hive-hbase-handler-0.13.1-cdh5.2.4.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/lib/htrace-core.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/lib/htrace-core-2.04.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-hadoop-compat.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-server.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-common.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-protocol.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-hadoop2-compat.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-client.jar
15/03/19 15:42:46 INFO exec.Utilities: Processing alias firm
15/03/19 15:42:46 INFO exec.Utilities: Adding input file hdfs://prodhadoop01-node-01:8020/user/hive/warehouse/pod03_ema.db/firm
15/03/19 15:42:46 INFO exec.Utilities: Content Summary not cached for hdfs://prodhadoop01-node-01:8020/user/hive/warehouse/pod03_ema.db/firm
15/03/19 15:42:46 INFO ql.Context: New scratch dir is hdfs://prodhadoop01-node-01:8020/tmp/hive-hive/hive_2015-03-19_15-42-46_054_9213739680141366916-6
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
15/03/19 15:42:46 INFO exec.Utilities: Serializing MapWork via kryo
15/03/19 15:42:46 INFO log.PerfLogger: </PERFLOG method=serializePlan start=1426779766217 end=1426779766247 duration=30 from=org.apache.hadoop.hive.ql.exec.Utilities>
15/03/19 15:42:46 INFO client.RMProxy: Connecting to ResourceManager at prodhadoop01-node-01/10.0.2.156:8032
15/03/19 15:42:46 INFO client.RMProxy: Connecting to ResourceManager at prodhadoop01-node-01/10.0.2.156:8032
15/03/19 15:42:46 INFO exec.Utilities: No plan file found: hdfs://prodhadoop01-node-01:8020/tmp/hive-hive/hive_2015-03-19_15-42-46_054_9213739680141366916-6/-mr-10004/2c0837da-5c34-4f5f-8685-8aaa2712d2dc/reduce.xml
15/03/19 15:42:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 187 for hive on 10.0.2.156:8020
15/03/19 15:42:46 INFO security.TokenCache: Got dt for hdfs://prodhadoop01-node-01:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 10.0.2.156:8020, Ident: (HDFS_DELEGATION_TOKEN token 187 for hive)
15/03/19 15:42:46 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/03/19 15:42:46 INFO log.PerfLogger: <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
15/03/19 15:42:46 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://prodhadoop01-node-01:8020/user/hive/warehouse/pod03_ema.db/firm; using filter path hdfs://prodhadoop01-node-01:8020/user/hive/warehouse/pod03_ema.db/firm
15/03/19 15:42:46 INFO input.FileInputFormat: Total input paths to process : 4
15/03/19 15:42:46 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 4, size left: 0
15/03/19 15:42:46 INFO io.CombineHiveInputFormat: number of splits 2
15/03/19 15:42:46 INFO log.PerfLogger: </PERFLOG method=getSplits start=1426779766808 end=1426779766821 duration=13 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
15/03/19 15:42:46 INFO mapreduce.JobSubmitter: number of splits:2
15/03/19 15:42:46 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1426777422106_0005
15/03/19 15:42:46 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: 10.0.2.156:8020, Ident: (HDFS_DELEGATION_TOKEN token 187 for hive)
15/03/19 15:42:47 INFO impl.YarnClientImpl: Submitted application application_1426777422106_0005
15/03/19 15:42:47 INFO mapreduce.Job: The url to track the job: https://prodhadoop01-node-01:8090/proxy/application_1426777422106_0005/
15/03/19 15:42:47 INFO exec.Task: Starting Job = job_1426777422106_0005, Tracking URL = https://prodhadoop01-node-01:8090/proxy/application_1426777422106_0005/
15/03/19 15:42:47 INFO exec.Task: Kill Command = /opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hadoop/bin/hadoop job  -kill job_1426777422106_0005
15/03/19 15:43:01 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
15/03/19 15:43:01 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/03/19 15:43:01 INFO exec.Task: 2015-03-19 15:43:01,246 Stage-1 map = 0%,  reduce = 0%
15/03/19 15:43:01 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/03/19 15:43:01 ERROR exec.Task: Ended Job = job_1426777422106_0005 with errors
15/03/19 15:43:01 INFO impl.YarnClientImpl: Killed application application_1426777422106_0005
15/03/19 15:43:01 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
15/03/19 15:43:01 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1426779766200 end=1426779781272 duration=15072 from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:43:01 INFO ql.Driver: MapReduce Jobs Launched: 
15/03/19 15:43:01 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
15/03/19 15:43:01 INFO ql.Driver: Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
15/03/19 15:43:01 INFO ql.Driver: Total MapReduce CPU Time Spent: 0 msec
15/03/19 15:43:01 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:43:01 INFO ZooKeeperHiveLockManager:  about to release lock for pod03_ema/firm
15/03/19 15:43:01 INFO ZooKeeperHiveLockManager:  about to release lock for pod03_ema
15/03/19 15:43:01 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1426779781273 end=1426779781289 duration=16 from=org.apache.hadoop.hive.ql.Driver>
15/03/19 15:43:01 ERROR operation.Operation: Error running hive query: 
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
	at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:147)
	at org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:69)
	at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:200)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
	at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
	at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:213)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

New Contributor

Greetings

 

If by chance you are still looking to resolve a return code 2 error while running Hive, I may have a solution for you if you don't get any information from the log files. Return code 2 is basically a camouflage for a Hadoop/YARN memory problem: not enough resources are configured in Hadoop/YARN to run your jobs. If you are running a single-node cluster, see the link below.

 

http://stackoverflow.com/questions/26540507/what-is-the-maximum-containers-in-a-single-node-cluster-...

 

You may be able to tweak the settings depending on your cluster setup. Even if this does not cure your problem 100%, at least the return code 2 or exit code 1 errors should disappear. Hope this helps.
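If you want to test the memory theory without touching cluster-wide configuration, the standard MRv2 memory properties can be overridden per session. A hedged sketch — the sizes below are purely illustrative and must be tuned to your NodeManager memory:

```shell
# Session-level overrides (nothing permanent); heap (-Xmx) is kept
# below the container size to leave headroom. Values are hypothetical.
hive -e "
set mapreduce.map.memory.mb=2048;
set mapreduce.reduce.memory.mb=2048;
set mapreduce.map.java.opts=-Xmx1638m;
set mapreduce.reduce.java.opts=-Xmx1638m;
select count(*) from test;"
```

If the query succeeds with these set, the cluster's default container sizing is the real culprit.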