Member since
05-18-2016
3
Posts
2
Kudos Received
0
Solutions
06-24-2016
11:12 AM
1 Kudo
Greetings. If by chance you are still looking to resolve a return code 2 error while running Hive, I may have a solution for you if you don't get any information from the log files. Return code 2 is basically a camouflage for a Hadoop/YARN memory problem: not enough resources are configured in Hadoop/YARN to run your jobs. If you are running a single-node cluster, see the link below: http://stackoverflow.com/questions/26540507/what-is-the-maximum-containers-in-a-single-node-cluster-hadoop You may be able to tweak the settings depending on your cluster setup. Even if this does not cure your problem 100%, at least the return code 2 or exit code 1 errors should disappear. Hope this helps.
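As a rough sketch, the YARN memory knobs in question live in yarn-site.xml on the node manager. The values below are purely illustrative for a small single-node box with roughly 8 GB of RAM; the right numbers depend entirely on your hardware and the other services sharing it:

<!-- yarn-site.xml: illustrative values for a single node with ~8 GB RAM -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>6144</value> <!-- total RAM YARN may hand out to containers on this node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value> <!-- smallest container YARN will allocate -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>6144</value> <!-- largest single container request allowed -->
</property>

With settings like these, per-task requests such as mapreduce.map.memory.mb and mapreduce.reduce.memory.mb must fit between the minimum and maximum allocation, or the containers never get scheduled and the job dies with the opaque return code 2.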
05-18-2016
11:18 AM
Greetings. I am having the same problem. I increased the heap size as requested and my Hive/MR job fails:

hive> set mapreduce.map.memory.mb=2048;
hive> set mapreduce.reduce.memory.mb=4096;
hive> select count(distinct warctype) from commoncrawl18 where warctype='warcinfo';
Query ID = jmill383_20160518141345_91d2a202-049e-4546-a9f7-e7183f2ff4bf
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1463594979064_0001, Tracking URL = http://starchild:8088/proxy/application_1463594979064_0001/
Kill Command = /opt/hadoop/bin/hadoop job -kill job_1463594979064_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2016-05-18 14:13:52,640 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1463594979064_0001 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://starchild:8088/cluster/app/application_1463594979064_0001
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

Is there another alternative remedy for this? Please advise.

John M