MapReduce job stuck at map 0% reduce 0%


The MapReduce job created by the Sqoop task is stuck at 0%. Any ideas what I am doing wrong?

2019-08-12 07:12:20,253 INFO [Socket Reader #1 for port 37566] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 37566
2019-08-12 07:12:20,256 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-08-12 07:12:20,257 INFO [IPC Server listener on 37566] org.apache.hadoop.ipc.Server: IPC Server listener on 37566: starting
2019-08-12 07:12:20,281 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2019-08-12 07:12:20,281 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2019-08-12 07:12:20,281 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2019-08-12 07:12:20,284 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: 0% of the mappers will be scheduled using OPPORTUNISTIC containers
2019-08-12 07:12:20,311 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at vm-cloudera-6x.dag.com/192.168.42.44:8030
2019-08-12 07:12:20,468 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: maxContainerCapability: <memory:8192, vCores:2>
2019-08-12 07:12:20,468 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: queue: root.users.cloudera
2019-08-12 07:12:20,473 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2019-08-12 07:12:20,473 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10
2019-08-12 07:12:20,497 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1565618944746_0001Job Transitioned from INITED to SETUP
2019-08-12 07:12:20,522 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2019-08-12 07:12:20,575 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1565618944746_0001Job Transitioned from SETUP to RUNNING
2019-08-12 07:12:20,605 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1565618944746_0001_m_000000 Task Transitioned from NEW to SCHEDULED
2019-08-12 07:12:20,607 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1565618944746_0001_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2019-08-12 07:12:20,619 INFO [Thread-58] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:4096, vCores:2>
2019-08-12 07:12:20,627 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1565618944746_0001, File: hdfs://vm-cloudera-6x.dag.com:8020/user/cloudera/.staging/job_1565618944746_0001/job_1565618944746_0...
2019-08-12 07:12:21,476 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2019-08-12 07:12:21,542 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1565618944746_0001: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:5120, vCores:1> knownNMs=1

2 Replies

Super Guru

Check your RM and NM logs. How much memory and how many vcores have you given the NM? Does the NM have enough resources for your task to run (5 GB RAM, 1 vcore)?
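A quick sketch for confirming what the NM is actually advertising (the node ID below is a placeholder; take it from the list output):

# List the NodeManagers the RM knows about, with their state
yarn node -list -all

# Show a node's capacity (Memory-Capacity, CPU-Capacity);
# NM capacity is set by yarn.nodemanager.resource.memory-mb
# and yarn.nodemanager.resource.cpu-vcores in yarn-site.xml
yarn node -status <node-id>

Note that in your log the map attempt asks for <memory:4096, vCores:2> while the reported resourcelimit is <memory:5120, vCores:1>, so the vcore request alone could be enough to keep the container from ever being allocated.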


If the NM and RM logs don't show much, enable DEBUG logging and run the same thing again.
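One way to do that at runtime, as a sketch (8088 and 8042 are the default RM and NM web ports, so adjust for your cluster; the level resets when the daemon restarts):

# Raise the ResourceManager's logger to DEBUG
hadoop daemonlog -setlevel vm-cloudera-6x.dag.com:8088 org.apache.hadoop.yarn.server.resourcemanager DEBUG

# Same for the NodeManager on a worker host (placeholder host)
hadoop daemonlog -setlevel <nm-host>:8042 org.apache.hadoop.yarn.server.nodemanager DEBUG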


Regards,

André



Super Guru
Go to the RM web UI to see how many resources your cluster has and check whether your job requires more than that. This can confirm that you are out of resources.
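If you prefer the shell, the RM exposes the same totals over its REST API; a sketch, assuming the default web port 8088:

# Compare totalMB/availableMB and totalVirtualCores/availableVirtualCores
# in the JSON response against what the job requests
curl -s http://vm-cloudera-6x.dag.com:8088/ws/v1/cluster/metrics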

Cheers
Eric