Expert Contributor
Posts: 87
Registered: 06-16-2014

Failed redirect for container

Hello, I am new to Hive. I have installed the Hive services via Cloudera Manager 5.0.2.

In Hive, I have just created two tables, and when I tried to run the query below I started getting errors.


select t1.filepath from test t1 join test4 t2 on t1.filepath = t2.filepath;


This is the output when I run the join query:


Total MapReduce jobs = 1
14/09/09 08:37:54 WARN conf.Configuration: file:/tmp/hdfs/hive_2014-09-09_08-37-53_099_6910109243210788373-1/-local-10005/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/09/09 08:37:55 WARN conf.Configuration: file:/tmp/hdfs/hive_2014-09-09_08-37-53_099_6910109243210788373-1/-local-10005/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/09/09 08:37:55 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/09/09 08:37:55 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/09/09 08:37:55 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/09/09 08:37:55 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/09/09 08:37:55 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/09/09 08:37:55 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/09/09 08:37:55 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/09/09 08:37:55 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
Execution log at: /tmp/hdfs/hdfs_20140909083737_5bf3f435-46e5-478f-832d-e179327d9493.log
2014-09-09 08:37:55     Starting to launch local task to process map join;      maximum memory = 257949696
2014-09-09 08:37:56     Dump the side-table into file: file:/tmp/hdfs/hive_2014-09-09_08-37-53_099_6910109243210788373-1/-local-10002/HashTable-Stage-3/MapJoin-mapfile51--.hashtable
2014-09-09 08:37:56     Upload 1 File to: file:/tmp/hdfs/hive_2014-09-09_08-37-53_099_6910109243210788373-1/-local-10002/HashTable-Stage-3/MapJoin-mapfile51--.hashtable
2014-09-09 08:37:56     End of local task; Time Taken: 0.59 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1410230271763_0004, Tracking URL = N/A
Kill Command = /opt/cloudera/parcels/CDH-5.0.2-1.cdh5.0.2.p0.13/lib/hadoop/bin/hadoop job  -kill job_1410230271763_0004
Hadoop job information for Stage-3: number of mappers: 0; number of reducers: 0
2014-09-09 08:38:12,780 Stage-3 map = 0%,  reduce = 0%
Ended Job = job_1410230271763_0004 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from
MapReduce Jobs Launched:
Job 0:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

When I check the JobTracker log, I see the message below.

Failed while trying to construct the redirect url to the log server. Log Server url may not be configured Container does not exist.


Please help...


Posts: 416
Topics: 51
Kudos: 82
Solutions: 49
Registered: 06-26-2013

Re: Failed redirect for container

@Balakumar90 This issue looks more related to your MapReduce configuration, so I've moved it to that board in the hope that someone here can help you.





Posts: 1,730
Kudos: 357
Solutions: 274
Registered: 07-31-2013

Re: Failed redirect for container

Check the job's actual failed tasks via your JobHistoryServer (it typically runs on port 19888). It should bring up a more familiar JT-like UI for your MR2 jobs and should help you find a more precise error explaining why the job failed.
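If you prefer the command line over the UI, a rough sketch of the same lookup is below. The host name `jhs-host` is a placeholder for your actual JobHistoryServer host; the job id is the one from your output above. The REST paths are the standard Hadoop HistoryServer API; whether `yarn logs` returns anything depends on log aggregation being enabled on your cluster.

```shell
# Placeholder host -- substitute your JobHistoryServer's host name.
JHS_HOST=jhs-host
JOB_ID=job_1410230271763_0004

# The JHS REST API exposes the job's state and diagnostics at this URL:
JOB_URL="http://${JHS_HOST}:19888/ws/v1/history/mapreduce/jobs/${JOB_ID}"
echo "$JOB_URL"

# Fetch it with curl (requires the JHS to be reachable from this machine):
#   curl -s "$JOB_URL"              # overall job state and diagnostics
#   curl -s "$JOB_URL/tasks"        # per-task breakdown to spot the failed task

# If YARN log aggregation is enabled, the container logs can also be pulled with:
#   yarn logs -applicationId application_1410230271763_0004
```

The "Container does not exist" redirect error you saw usually means the logs were never aggregated to the log server, which is why pulling them directly (or browsing the JHS UI) tends to be more reliable.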