03-04-2016 03:28 AM
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1455546410616_13085, Tracking URL = http://ndrm:8088/proxy/application_1455546410616_13085/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1455546410616_13085
Hadoop job information for Stage-1: number of mappers: 7; number of reducers: 0
2016-03-03 13:39:54,224 Stage-1 map = 0%, reduce = 0%
2016-03-03 13:40:04,733 Stage-1 map = 57%, reduce = 0%, Cumulative CPU 13.0 sec
2016-03-03 13:40:26,943 Stage-1 map = 86%, reduce = 0%, Cumulative CPU 112.9 sec
2016-03-03 13:40:30,114 Stage-1 map = 96%, reduce = 0%, Cumulative CPU 142.98 sec
2016-03-03 13:40:48,010 Stage-1 map = 86%, reduce = 0%, Cumulative CPU 104.61 sec
2016-03-03 13:41:22,610 Stage-1 map = 96%, reduce = 0%, Cumulative CPU 142.05 sec
2016-03-03 13:41:40,425 Stage-1 map = 86%, reduce = 0%, Cumulative CPU 104.61 sec
2016-03-03 13:42:16,026 Stage-1 map = 96%, reduce = 0%, Cumulative CPU 143.26 sec
2016-03-03 13:42:34,857 Stage-1 map = 86%, reduce = 0%, Cumulative CPU 104.61 sec
2016-03-03 13:43:09,393 Stage-1 map = 96%, reduce = 0%, Cumulative CPU 144.34 sec
2016-03-03 13:43:28,197 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 104.61 sec
MapReduce Total cumulative CPU time: 1 minutes 44 seconds 610 msec
Ended Job = job_1455546410616_13085 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1455546410616_13085_m_000003 (and more) from job job_1455546410616_13085
Task with the most failures(4):
-----
Task ID:
task_1455546410616_13085_m_000000
URL:
http://ndrm:8088/taskdetails.jsp?jobid=job_1455546410616_13085&tipid=task_1455546410616_13085_m_000000
-----
Diagnostic Messages for this Task:
Error: Java heap space
Increasing the JVM heap and the map memory allocated to the container worked for me. These are the values I used:

hive> set mapreduce.map.memory.mb=4096;
hive> set mapreduce.map.java.opts=-Xmx3600M;

If you still get the Java heap space error, try increasing them to higher values, but make sure that mapreduce.map.java.opts does not exceed mapreduce.map.memory.mb. If you are running on Tez, you may also have to set:

hive> set hive.tez.java.opts=-Xmx3600M;

Thanks
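To make the "-Xmx must not exceed the container size" rule concrete, here is a minimal sketch of how you might derive a safe heap value from mapreduce.map.memory.mb. The 0.8 fraction is my assumption, a common rule of thumb (not an official Hadoop default) that leaves headroom for non-heap JVM memory so YARN does not kill the container:

```python
def map_heap_opts(container_mb, heap_fraction=0.8):
    """Suggest an -Xmx value for mapreduce.map.java.opts given
    mapreduce.map.memory.mb (the YARN container size for map tasks).

    heap_fraction is a rule-of-thumb assumption: the JVM heap should
    sit well below the container limit, leaving room for metaspace,
    thread stacks, and native memory.
    """
    heap_mb = int(container_mb * heap_fraction)
    return "-Xmx{}M".format(heap_mb)

# For a 4096 MB map container, this suggests roughly -Xmx3276M,
# consistent with the -Xmx3600M used above (both stay under 4096 MB).
print(map_heap_opts(4096))
```

The same check applies on Tez: whatever you put in hive.tez.java.opts should stay comfortably below the Tez container size.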