
Job submitted to MapReduce in YARN is stuck while ingesting data using Sqoop

Explorer

I am ingesting data from MySQL into CDH5 HDFS using Sqoop. The job is submitted to MapReduce, but there is no activity after I get the MapReduce job ID:

 

INFO mapreduce.JobSubmitter: number of splits:1
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1500040023027_0002
INFO impl.YarnClientImpl: Submitted application application_1500040023027_0002
INFO mapreduce.Job: The url to track the job: http://pc1.localdomain.com:8088/proxy/application_1500040023027_0002/
INFO mapreduce.Job: Running job: job_1500040023027_0002

I have set up CDH5 on RHEL using the cluster setup, but I have only one PC in the cluster. I do see warnings to have at least 3 DataNodes, but I think that should not be an issue as long as I am not running a heavy workload.

 

Screenshot from 2017-07-14 12-23-41.png

I have also set the NameNode and Secondary NameNode heap size to 4 GB. The HDFS block size is set to 64 MB. The log file sizes are also taken care of by setting them to a 2 GB minimum.

In the YARN settings, I have set the root and default queues' min and max vcores to 1 and 4, and min/max memory to 1 GB and 4 GB.

 

The MapReduce screenshot shows that 0 vcores and 0 MB of memory have been allocated to the job.

Screenshot from 2017-07-14 19-47-13.png

Can somebody point me to how to get this working?

1 ACCEPTED SOLUTION

13 REPLIES

Champion

Why does your cluster show "red"? I suspect a disk-space issue, but I am only guessing; run the host health check as well.

What values have you put in for the following parameters?

yarn.nodemanager.resource.memory-mb
yarn.scheduler.minimum-allocation-mb
mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
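For a single node giving roughly 4 GB to YARN, a sketch of what these four settings might look like (the values here are illustrative assumptions for this thread, not verified against the poster's cluster):

```xml
<!-- yarn-site.xml: illustrative values for a single 4 GB node -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value> <!-- total memory this NodeManager may hand out -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value> <!-- smallest container the scheduler will grant -->
</property>

<!-- mapred-site.xml -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value> <!-- container size for each map task -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value> <!-- container size for each reduce task -->
</property>
```

If yarn.nodemanager.resource.memory-mb is left at 0, YARN advertises no capacity on the node and every job sits in the ACCEPTED state forever, which matches the symptom here.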

 

Explorer

 

 

 <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>

 

 

In /hadoop/conf.cloudera.yarn/mapred-site.xml the following were set to 0, so I changed them to match /hadoop/conf/mapred-site.xml:

 

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
  </property>

I could not find the parameter

yarn.nodemanager.resource.memory-mb

 

It is still stuck at the same point; no progress.
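For reference, a job that hangs right after "Running job:" usually means YARN cannot grant the requested containers. The sizing arithmetic can be sanity-checked with a small sketch; the node and container values below are assumptions taken from this thread (the 1536 MB figure is the stock Hadoop default for yarn.app.mapreduce.am.resource.mb), not a real YARN API:

```python
# Sanity-check YARN container sizing: a MapReduce job hangs at
# "Running job:" when no NodeManager can satisfy the ApplicationMaster
# or task container request.

def can_schedule(node_memory_mb, node_vcores, request_mb, request_vcores,
                 min_alloc_mb=1024):
    """Return True if a single container request fits on the node.

    YARN rounds each memory request up to a multiple of the scheduler's
    minimum allocation before placing it.
    """
    # Round the request up to the scheduler's allocation granularity.
    granted_mb = -(-request_mb // min_alloc_mb) * min_alloc_mb
    return granted_mb <= node_memory_mb and request_vcores <= node_vcores

# Assumed values matching the thread: a 4 GB / 4 vcore node,
# 1 GB task containers, and a 1536 MB MRAppMaster container.
node_mb, node_vcores = 4096, 4
print(can_schedule(node_mb, node_vcores, 1536, 1))  # AM container
print(can_schedule(node_mb, node_vcores, 1024, 1))  # map task

# If the NodeManager advertises 0 MB (the symptom in the screenshots),
# nothing can ever be scheduled:
print(can_schedule(0, node_vcores, 1536, 1))
```

With the node capacity at 0 MB the check fails for every request, which is consistent with the screenshot showing 0 vcores and 0 memory allocated.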

 

 

 

 


Champion

As pointed out, it is most likely the resource allocation in those parameters; good that you found it in that thread. I was close to narrowing it down. Sometimes it can also be a socket configuration issue on the OS. The logs will clearly guide us to it.