<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question HIVE: return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/19750#M3185</link>
    <description>Archived support question: Hive fails with "return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask" after enabling Kerberos and the Sentry service. Full post and logs in the thread items below.</description>
    <pubDate>Fri, 16 Sep 2022 09:09:11 GMT</pubDate>
    <dc:creator>iamfromsky</dc:creator>
    <dc:date>2022-09-16T09:09:11Z</dc:date>
    <item>
      <title>HIVE: return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/19750#M3185</link>
      <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have set up Kerberos + the Sentry service (not a policy file), and currently everything works fine except Hive.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;"select * from table" is fine; any statement with no condition finishes OK. But "select count(*) from table" or "select * from table where xxx=xxx" fails with the error in the title. That seems strange. Has anybody seen this? Thanks in advance.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;More details below:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;14/10/06 05:28:31 INFO mapreduce.Job: The url to track the job: &lt;A target="_blank" href="http://namenode01.hadoop:8088/proxy/application_1412544483910_0001/"&gt;http://namenode01.hadoop:8088/proxy/application_1412544483910_0001/&lt;/A&gt;
14/10/06 05:28:31 INFO exec.Task: Starting Job = job_1412544483910_0001, Tracking URL = &lt;A target="_blank" href="http://namenode01.hadoop:8088/proxy/application_1412544483910_0001/"&gt;http://namenode01.hadoop:8088/proxy/application_1412544483910_0001/&lt;/A&gt;
14/10/06 05:28:31 INFO exec.Task: Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job  -kill job_1412544483910_0001
14/10/06 05:28:31 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:32 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:33 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:34 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:35 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:36 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:37 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:38 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:40 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:41 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:43 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:45 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:46 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:48 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:50 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:53 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:55 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:55 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
14/10/06 05:28:55 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/10/06 05:28:55 INFO exec.Task: 2014-10-06 05:28:55,502 Stage-1 map = 0%,  reduce = 0%
14/10/06 05:28:55 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/10/06 05:28:55 ERROR exec.Task: Ended Job = job_1412544483910_0001 with errors
14/10/06 05:28:55 INFO impl.YarnClientImpl: Killed application application_1412544483910_0001
14/10/06 05:28:55 INFO log.PerfLogger: &amp;lt;/PERFLOG method=task.MAPRED.Stage-1 start=1412544509576 end=1412544535559 duration=25983 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
14/10/06 05:28:55 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/10/06 05:28:55 INFO log.PerfLogger: &amp;lt;/PERFLOG method=Driver.execute start=1412544509575 end=1412544535560 duration=25985 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
14/10/06 05:28:55 INFO ql.Driver: MapReduce Jobs Launched: 
14/10/06 05:28:55 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
14/10/06 05:28:55 INFO ql.Driver: Job 0:  HDFS Read: 0 HDFS Write: 0 FAIL
14/10/06 05:28:55 INFO ql.Driver: Total MapReduce CPU Time Spent: 0 msec
14/10/06 05:28:55 INFO log.PerfLogger: &amp;lt;PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver&amp;gt;
14/10/06 05:28:55 INFO ZooKeeperHiveLockManager:  about to release lock for default/**bleep**you
14/10/06 05:28:55 INFO ZooKeeperHiveLockManager:  about to release lock for default
14/10/06 05:28:55 INFO log.PerfLogger: &amp;lt;/PERFLOG method=releaseLocks start=1412544535562 end=1412544535579 duration=17 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
14/10/06 05:28:55 ERROR operation.Operation: Error: 
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
	at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:146)
	at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:64)
	at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:177)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
14/10/06 05:28:57 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:09:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/19750#M3185</guid>
      <dc:creator>iamfromsky</dc:creator>
      <dc:date>2022-09-16T09:09:11Z</dc:date>
    </item>
    <item>
      <title>Re: HIVE: return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/19752#M3186</link>
      <description>&lt;P&gt;Everyone, below are some tests I ran; I set HADOOP_YARN_HOME manually.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Test one: if the home is&amp;nbsp;&lt;SPAN&gt;hadoop-0.20-mapreduce, it works.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[hdfs@namenode02 ~]$ export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/&lt;BR /&gt;[hdfs@namenode02 ~]$ hive&lt;BR /&gt;14/10/06 06:59:04 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.&lt;/P&gt;&lt;P&gt;Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties&lt;BR /&gt;hive&amp;gt; select count(*) from test;&lt;BR /&gt;Total MapReduce jobs = 1&lt;BR /&gt;Launching Job 1 out of 1&lt;BR /&gt;Number of reduce tasks determined at compile time: 1&lt;BR /&gt;In order to change the average load for a reducer (in bytes):&lt;BR /&gt;set hive.exec.reducers.bytes.per.reducer=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to limit the maximum number of reducers:&lt;BR /&gt;set hive.exec.reducers.max=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to set a constant number of reducers:&lt;BR /&gt;set mapred.reduce.tasks=&amp;lt;number&amp;gt;&lt;BR /&gt;Starting Job = job_local1939864979_0001, Tracking URL = http://localhost:8080/&lt;BR /&gt;Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_local1939864979_0001&lt;BR /&gt;Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0&lt;BR /&gt;2014-10-06 06:59:14,364 Stage-1 map = 0%, reduce = 100%&lt;BR /&gt;Ended Job = job_local1939864979_0001&lt;BR /&gt;MapReduce Jobs Launched:&lt;BR /&gt;Job 0: HDFS Read: 0 HDFS Write: 12904 
SUCCESS&lt;BR /&gt;Total MapReduce CPU Time Spent: 0 msec&lt;BR /&gt;OK&lt;BR /&gt;0&lt;BR /&gt;Time taken: 4.095 seconds, Fetched: 1 row(s)&lt;BR /&gt;hive&amp;gt; exit;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;TEST two: the default setup; I didn't change anything, just logged in to the OS as hdfs and ran the test. It failed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[hdfs@datanode03 ~]$ hive&lt;BR /&gt;14/10/06 07:03:27 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive&lt;BR /&gt;14/10/06 07:03:27 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize&lt;BR /&gt;14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize&lt;BR /&gt;14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack&lt;BR /&gt;14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node&lt;BR /&gt;14/10/06 07:03:27 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces&lt;BR /&gt;14/10/06 07:03:27 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative&lt;BR /&gt;14/10/06 07:03:27 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. 
Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.&lt;/P&gt;&lt;P&gt;Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties&lt;BR /&gt;hive&amp;gt; select count(*) from test&lt;BR /&gt;&amp;gt; ;&lt;BR /&gt;Total MapReduce jobs = 1&lt;BR /&gt;Launching Job 1 out of 1&lt;BR /&gt;Number of reduce tasks determined at compile time: 1&lt;BR /&gt;In order to change the average load for a reducer (in bytes):&lt;BR /&gt;set hive.exec.reducers.bytes.per.reducer=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to limit the maximum number of reducers:&lt;BR /&gt;set hive.exec.reducers.max=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to set a constant number of reducers:&lt;BR /&gt;set mapred.reduce.tasks=&amp;lt;number&amp;gt;&lt;BR /&gt;Starting Job = job_1412549128740_0004, Tracking URL = &lt;A target="_blank" href="http://namenode01.hadoop:8088/proxy/application_1412549128740_0004/"&gt;http://namenode01.hadoop:8088/proxy/application_1412549128740_0004/&lt;/A&gt;&lt;BR /&gt;Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_1412549128740_0004&lt;BR /&gt;Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0&lt;BR /&gt;2014-10-06 07:03:53,523 Stage-1 map = 0%, reduce = 0%&lt;BR /&gt;Ended Job = job_1412549128740_0004 with errors&lt;BR /&gt;Error during job, obtaining debugging information...&lt;BR /&gt;FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask&lt;BR /&gt;MapReduce Jobs Launched:&lt;BR /&gt;Job 0: HDFS Read: 0 HDFS Write: 0 FAIL&lt;BR /&gt;Total MapReduce CPU Time Spent: 0 msec&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;TEST THREE: it failed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[hdfs@namenode02 hadoop-yarn]$ export 
HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop-yarn/&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[hdfs@namenode02 hadoop-yarn]$ hive&lt;BR /&gt;14/10/06 06:44:38 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.&lt;/P&gt;&lt;P&gt;Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties&lt;BR /&gt;hive&amp;gt; show tables;&lt;BR /&gt;FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient&lt;BR /&gt;hive&amp;gt; show tables;&lt;BR /&gt;OK&lt;BR /&gt;database_params&lt;BR /&gt;**bleep**you&lt;BR /&gt;sequence_table&lt;BR /&gt;tbls&lt;BR /&gt;test&lt;BR /&gt;test1&lt;BR /&gt;Time taken: 0.338 seconds, Fetched: 6 row(s)&lt;BR /&gt;hive&amp;gt; select count(*) from test;&lt;BR /&gt;Total MapReduce jobs = 1&lt;BR /&gt;Launching Job 1 out of 1&lt;BR /&gt;Number of reduce tasks determined at compile time: 1&lt;BR /&gt;In order to change the average load for a reducer (in bytes):&lt;BR /&gt;set hive.exec.reducers.bytes.per.reducer=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to limit the maximum number of reducers:&lt;BR /&gt;set hive.exec.reducers.max=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to set a constant number of reducers:&lt;BR /&gt;set mapred.reduce.tasks=&amp;lt;number&amp;gt;&lt;BR /&gt;Starting Job = job_1412549128740_0003, Tracking URL = &lt;A target="_blank" href="http://namenode01.hadoop:8088/proxy/application_1412549128740_0003/"&gt;http://namenode01.hadoop:8088/proxy/application_1412549128740_0003/&lt;/A&gt;&lt;BR /&gt;Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill 
job_1412549128740_0003&lt;BR /&gt;Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0&lt;BR /&gt;2014-10-06 06:54:19,156 Stage-1 map = 0%, reduce = 0%&lt;BR /&gt;Ended Job = job_1412549128740_0003 with errors&lt;BR /&gt;Error during job, obtaining debugging information...&lt;BR /&gt;FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask&lt;BR /&gt;MapReduce Jobs Launched:&lt;BR /&gt;Job 0: HDFS Read: 0 HDFS Write: 0 FAIL&lt;BR /&gt;Total MapReduce CPU Time Spent: 0 msec&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The conclusion: it only works when I set HADOOP_YARN_HOME to the *0.20-* path. What can I do now?&lt;/P&gt;</description>
      <pubDate>Sun, 05 Oct 2014 23:11:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/19752#M3186</guid>
      <dc:creator>iamfromsky</dc:creator>
      <dc:date>2014-10-05T23:11:14Z</dc:date>
    </item>
    <item>
      <title>Re: HIVE: return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/19754#M3187</link>
      <description>&lt;P&gt;I managed to resolve this issue.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;First I enabled debug mode to check the details, but found nothing.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Then I opened namenode:8088 to check the history file and container details, but the container log could not be opened, meaning the container no longer existed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Finally, I browsed the HDFS /user directory with the HUE file browser and opened some logs; one log recorded that /tmp/history was permission denied. I deleted /tmp/history, tried again, and it works now.&lt;/P&gt;</description>
      <pubDate>Mon, 06 Oct 2014 00:49:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/19754#M3187</guid>
      <dc:creator>iamfromsky</dc:creator>
      <dc:date>2014-10-06T00:49:55Z</dc:date>
    </item>
    <item>
      <title>Re: HIVE: return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/25714#M3188</link>
      <description>&lt;P&gt;I am seeing the same error and cannot figure out a solution. I am using Kerberos and a Sentry policy file, on CDH 5.3.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO parse.ParseDriver: Parsing command: select id from firm
15/03/19 15:42:46 INFO parse.ParseDriver: Parse Completed
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;/PERFLOG method=parse start=1426779766054 end=1426779766055 duration=1 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO conf.HiveAuthzConf: DefaultFS: hdfs://prodhadoop01-node-01:8020
15/03/19 15:42:46 INFO conf.HiveAuthzConf: DefaultFS: hdfs://prodhadoop01-node-01:8020
15/03/19 15:42:46 WARN mortbay.log: Using the deprecated config setting hive.sentry.server instead of sentry.hive.server
15/03/19 15:42:46 WARN mortbay.log: Using the deprecated config setting hive.sentry.provider instead of sentry.provider
15/03/19 15:42:46 WARN mortbay.log: Using the deprecated config setting hive.sentry.provider.resource instead of sentry.hive.provider.resource
15/03/19 15:42:46 INFO file.SimpleFileProviderBackend: Parsing /user/hive/sentry/sentry-provider.ini
15/03/19 15:42:46 INFO file.SimpleFileProviderBackend: Filesystem: hdfs://prodhadoop01-node-01:8020
15/03/19 15:42:46 INFO file.PolicyFiles: Opening /user/hive/sentry/sentry-provider.ini
15/03/19 15:42:46 INFO file.SimpleFileProviderBackend: Section databases needs no further processing
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic Analysis
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Get metadata for source tables
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Get metadata for subqueries
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Get metadata for destination tables
15/03/19 15:42:46 INFO ql.Context: New scratch dir is hdfs://prodhadoop01-node-01:8020/tmp/hive-hive/hive_2015-03-19_15-42-46_054_9213739680141366916-1
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Completed getting MetaData in Semantic Analysis
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Set stats collection dir : hdfs://prodhadoop01-node-01:8020/tmp/hive-hive/hive_2015-03-19_15-42-46_054_9213739680141366916-1/-ext-10002
15/03/19 15:42:46 INFO ppd.OpProcFactory: Processing for FS(27)
15/03/19 15:42:46 INFO ppd.OpProcFactory: Processing for SEL(26)
15/03/19 15:42:46 INFO ppd.OpProcFactory: Processing for TS(25)
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner&amp;gt;
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;/PERFLOG method=partition-retrieving start=1426779766137 end=1426779766137 duration=0 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner&amp;gt;
15/03/19 15:42:46 INFO physical.MetadataOnlyOptimizer: Looking for table scans where optimization is applicable
15/03/19 15:42:46 INFO physical.MetadataOnlyOptimizer: Found 0 metadata only table scans
15/03/19 15:42:46 INFO parse.SemanticAnalyzer: Completed plan generation
15/03/19 15:42:46 INFO ql.Driver: Semantic Analysis Completed
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;/PERFLOG method=semanticAnalyze start=1426779766055 end=1426779766138 duration=83 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO exec.ListSinkOperator: Initializing Self 28 OP
15/03/19 15:42:46 INFO exec.ListSinkOperator: Operator 28 OP initialized
15/03/19 15:42:46 INFO exec.ListSinkOperator: Initialization Done 28 OP
15/03/19 15:42:46 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:id, type:int, comment:null)], properties:null)
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;/PERFLOG method=compile start=1426779766054 end=1426779766139 duration=85 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=acquireReadWriteLocks from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO lockmgr.DummyTxnManager: Creating lock manager of type org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
15/03/19 15:42:46 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=prodhadoop01-node-01:2181,prodhadoop01-node-04:2181,prodhadoop01-node-05:2181 sessionTimeout=600000 watcher=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager$DummyWatcher@1a65511f
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;/PERFLOG method=acquireReadWriteLocks start=1426779766141 end=1426779766200 duration=59 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO ql.Driver: Starting command: select id from firm
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=PreHook.org.apache.sentry.binding.hive.HiveAuthzBindingPreExecHook from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;/PERFLOG method=PreHook.org.apache.sentry.binding.hive.HiveAuthzBindingPreExecHook start=1426779766200 end=1426779766200 duration=0 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO ql.Driver: Total jobs = 1
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;/PERFLOG method=TimeToSubmit start=1426779766141 end=1426779766201 duration=60 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=task.MAPRED.Stage-1 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:42:46 INFO ql.Driver: Launching Job 1 out of 1
15/03/19 15:42:46 INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
15/03/19 15:42:46 INFO ql.Context: New scratch dir is hdfs://prodhadoop01-node-01:8020/tmp/hive-hive/hive_2015-03-19_15-42-46_054_9213739680141366916-6
15/03/19 15:42:46 INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
15/03/19 15:42:46 INFO mr.ExecDriver: adding libjars: file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hive/lib/hive-hbase-handler-0.13.1-cdh5.2.4.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/lib/htrace-core.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/lib/htrace-core-2.04.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-hadoop-compat.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-server.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-common.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-protocol.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-hadoop2-compat.jar,file:///opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hbase/hbase-client.jar
15/03/19 15:42:46 INFO exec.Utilities: Processing alias firm
15/03/19 15:42:46 INFO exec.Utilities: Adding input file hdfs://prodhadoop01-node-01:8020/user/hive/warehouse/pod03_ema.db/firm
15/03/19 15:42:46 INFO exec.Utilities: Content Summary not cached for hdfs://prodhadoop01-node-01:8020/user/hive/warehouse/pod03_ema.db/firm
15/03/19 15:42:46 INFO ql.Context: New scratch dir is hdfs://prodhadoop01-node-01:8020/tmp/hive-hive/hive_2015-03-19_15-42-46_054_9213739680141366916-6
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities&amp;gt;
15/03/19 15:42:46 INFO exec.Utilities: Serializing MapWork via kryo
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;/PERFLOG method=serializePlan start=1426779766217 end=1426779766247 duration=30 from=org.apache.hadoop.hive.ql.exec.Utilities&amp;gt;
15/03/19 15:42:46 INFO client.RMProxy: Connecting to ResourceManager at prodhadoop01-node-01/10.0.2.156:8032
15/03/19 15:42:46 INFO client.RMProxy: Connecting to ResourceManager at prodhadoop01-node-01/10.0.2.156:8032
15/03/19 15:42:46 INFO exec.Utilities: No plan file found: hdfs://prodhadoop01-node-01:8020/tmp/hive-hive/hive_2015-03-19_15-42-46_054_9213739680141366916-6/-mr-10004/2c0837da-5c34-4f5f-8685-8aaa2712d2dc/reduce.xml
15/03/19 15:42:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 187 for hive on 10.0.2.156:8020
15/03/19 15:42:46 INFO security.TokenCache: Got dt for hdfs://prodhadoop01-node-01:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 10.0.2.156:8020, Ident: (HDFS_DELEGATION_TOKEN token 187 for hive)
15/03/19 15:42:46 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat&amp;gt;
15/03/19 15:42:46 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://prodhadoop01-node-01:8020/user/hive/warehouse/pod03_ema.db/firm; using filter path hdfs://prodhadoop01-node-01:8020/user/hive/warehouse/pod03_ema.db/firm
15/03/19 15:42:46 INFO input.FileInputFormat: Total input paths to process : 4
15/03/19 15:42:46 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 4, size left: 0
15/03/19 15:42:46 INFO io.CombineHiveInputFormat: number of splits 2
15/03/19 15:42:46 INFO log.PerfLogger: &amp;lt;/PERFLOG method=getSplits start=1426779766808 end=1426779766821 duration=13 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat&amp;gt;
15/03/19 15:42:46 INFO mapreduce.JobSubmitter: number of splits:2
15/03/19 15:42:46 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1426777422106_0005
15/03/19 15:42:46 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: 10.0.2.156:8020, Ident: (HDFS_DELEGATION_TOKEN token 187 for hive)
15/03/19 15:42:47 INFO impl.YarnClientImpl: Submitted application application_1426777422106_0005
15/03/19 15:42:47 INFO mapreduce.Job: The url to track the job: https://prodhadoop01-node-01:8090/proxy/application_1426777422106_0005/
15/03/19 15:42:47 INFO exec.Task: Starting Job = job_1426777422106_0005, Tracking URL = https://prodhadoop01-node-01:8090/proxy/application_1426777422106_0005/
15/03/19 15:42:47 INFO exec.Task: Kill Command = /opt/cloudera/parcels/CDH-5.2.4-1.cdh5.2.4.p0.3/lib/hadoop/bin/hadoop job  -kill job_1426777422106_0005
15/03/19 15:43:01 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
15/03/19 15:43:01 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/03/19 15:43:01 INFO exec.Task: 2015-03-19 15:43:01,246 Stage-1 map = 0%,  reduce = 0%
15/03/19 15:43:01 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/03/19 15:43:01 ERROR exec.Task: Ended Job = job_1426777422106_0005 with errors
15/03/19 15:43:01 INFO impl.YarnClientImpl: Killed application application_1426777422106_0005
15/03/19 15:43:01 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
15/03/19 15:43:01 INFO log.PerfLogger: &amp;lt;/PERFLOG method=Driver.execute start=1426779766200 end=1426779781272 duration=15072 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:43:01 INFO ql.Driver: MapReduce Jobs Launched: 
15/03/19 15:43:01 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
15/03/19 15:43:01 INFO ql.Driver: Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
15/03/19 15:43:01 INFO ql.Driver: Total MapReduce CPU Time Spent: 0 msec
15/03/19 15:43:01 INFO log.PerfLogger: &amp;lt;PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:43:01 INFO ZooKeeperHiveLockManager:  about to release lock for pod03_ema/firm
15/03/19 15:43:01 INFO ZooKeeperHiveLockManager:  about to release lock for pod03_ema
15/03/19 15:43:01 INFO log.PerfLogger: &amp;lt;/PERFLOG method=releaseLocks start=1426779781273 end=1426779781289 duration=16 from=org.apache.hadoop.hive.ql.Driver&amp;gt;
15/03/19 15:43:01 ERROR operation.Operation: Error running hive query: 
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
	at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:147)
	at org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:69)
	at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:200)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
	at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
	at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:213)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)&lt;/PRE&gt;</description>
      <pubDate>Thu, 19 Mar 2015 17:44:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/25714#M3188</guid>
      <dc:creator>ngealy</dc:creator>
      <dc:date>2015-03-19T17:44:04Z</dc:date>
    </item>
    <item>
      <title>Re: HIVE: return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/42286#M3189</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Greetings,&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;If you are still trying to resolve a return code 2 error when running Hive and the log files give you no useful information, I may have a solution. Return code 2 is essentially camouflage for a Hadoop/YARN memory problem: not enough resources are configured in Hadoop/YARN to run your jobs. If you are running a single-node cluster, see the link below.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;A href="http://stackoverflow.com/questions/26540507/what-is-the-maximum-containers-in-a-single-node-cluster-hadoop" target="_blank"&gt;http://stackoverflow.com/questions/26540507/what-is-the-maximum-containers-in-a-single-node-cluster-hadoop&lt;/A&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;You may be able to tweak the settings depending on your cluster setup. Even if this does not cure your problem completely, at least the return code 2 or exit code 1 errors should disappear. Hope this helps.&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 24 Jun 2016 18:12:37 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HIVE-return-code-2-from-org-apache-hadoop-hive-ql-exec-mr/m-p/42286#M3189</guid>
      <dc:creator>FIBERNACHI</dc:creator>
      <dc:date>2016-06-24T18:12:37Z</dc:date>
    </item>
  </channel>
</rss>