Support Questions

Find answers, ask questions, and share your expertise

map/reduce stuck at 0%

Contributor

I have run into an issue. I get the Hive prompt and can run basic Hive queries that don't execute an MR job at the backend, but when I run a query that does execute an MR job, it hangs with no further progress (no mapper/reducer progress).

I have checked the ResourceManager queue, and it looks OK; the container is allocated to the query.

I have also checked that MapReduce2 is up and running.

Can anybody suggest what needs to be done in this case?

5 REPLIES

Super Guru
@Tajinderpal Singh

A job stuck in the ACCEPTED state on YARN usually means there are not enough free resources. You can check this at http://resourcemanager:port/cluster/scheduler:

  1. if Memory Used + Memory Reserved >= Memory Total, memory is not enough
  2. if VCores Used + VCores Reserved >= VCores Total, VCores are not enough

It may also be limited by parameters such as maxAMShare.
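
If you'd rather check from a script than the scheduler UI, the ResourceManager REST API exposes the same numbers. A minimal sketch of the two checks above, assuming the default RM web port 8088 (substitute your own host and port):

```python
# Minimal sketch: pull cluster metrics from the ResourceManager REST API
# and apply the two exhaustion checks above. Port 8088 is the usual
# default RM web port; adjust for your cluster.
import json
from urllib.request import urlopen

metrics = json.load(urlopen("http://resourcemanager:8088/ws/v1/cluster/metrics"))
m = metrics["clusterMetrics"]

mem_short = m["allocatedMB"] + m["reservedMB"] >= m["totalMB"]
vcores_short = (m["allocatedVirtualCores"] + m["reservedVirtualCores"]
                >= m["totalVirtualCores"])
print("memory exhausted:", mem_short)
print("vcores exhausted:", vcores_short)
```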

Follow this blog: http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/

This describes in detail how to set the parameters for YARN containers.

Check the parameters below; a quick way to inspect them on a node is sketched after the two lists.

1) yarn-site.xml

  • yarn.resourcemanager.hostname = hostname_of_the_master
  • yarn.nodemanager.resource.memory-mb = 4000
  • yarn.nodemanager.resource.cpu-vcores = 2
  • yarn.scheduler.minimum-allocation-mb = 4000

2) mapred-site.xml

  • yarn.app.mapreduce.am.resource.mb = 4000
  • yarn.app.mapreduce.am.command-opts = -Xmx3768m
  • mapreduce.map.cpu.vcores = 2
  • mapreduce.reduce.cpu.vcores = 2
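
To confirm what a node is actually using, note that the *-site.xml files are plain Hadoop configuration XML, so a few lines of Python can dump the relevant keys. A sketch, assuming the usual HDP client-config path /etc/hadoop/conf (adjust if yours differs):

```python
# Sketch: print the settings listed above from the live *-site.xml files.
# /etc/hadoop/conf is the typical HDP layout; change the paths if needed.
import xml.etree.ElementTree as ET

def site_props(path):
    """Parse a Hadoop <configuration> file into a {name: value} dict."""
    root = ET.parse(path).getroot()
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

yarn = site_props("/etc/hadoop/conf/yarn-site.xml")
mapred = site_props("/etc/hadoop/conf/mapred-site.xml")

for key in ("yarn.resourcemanager.hostname",
            "yarn.nodemanager.resource.memory-mb",
            "yarn.nodemanager.resource.cpu-vcores",
            "yarn.scheduler.minimum-allocation-mb"):
    print(key, "=", yarn.get(key))

for key in ("yarn.app.mapreduce.am.resource.mb",
            "yarn.app.mapreduce.am.command-opts",
            "mapreduce.map.cpu.vcores",
            "mapreduce.reduce.cpu.vcores"):
    print(key, "=", mapred.get(key))
```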

Super Guru

@Tajinderpal Singh You can also use the script below, recommended by HWX:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/determi...

You can try it to get recommendations tailored to your cluster resources.
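
For reference, the arithmetic that script automates looks roughly like this, per the formula in the HDP guide. Treat the example inputs (2 cores, 6 GB RAM, 2 disks) and the reserved-memory and minimum-container-size values as placeholders to replace with your own numbers from the guide's tables:

```python
# Rough sketch of the HDP sizing formula the linked script automates:
#   containers = min(2 * CORES, 1.8 * DISKS, available_RAM / MIN_CONTAINER_SIZE)
#   RAM_per_container = max(MIN_CONTAINER_SIZE, available_RAM / containers)
# reserved_mb (for the OS/daemons) and min_container_mb follow the guide's
# tables; the inputs below are placeholder assumptions, not recommendations.
def recommend(cores, ram_mb, disks, reserved_mb=1024, min_container_mb=512):
    available = ram_mb - reserved_mb
    containers = int(min(2 * cores, 1.8 * disks, available / min_container_mb))
    ram_per_container = max(min_container_mb, available // containers)
    return {
        "yarn.nodemanager.resource.memory-mb": containers * ram_per_container,
        "yarn.scheduler.minimum-allocation-mb": ram_per_container,
        "mapreduce.map.memory.mb": ram_per_container,
        "mapreduce.reduce.memory.mb": 2 * ram_per_container,
        "yarn.app.mapreduce.am.resource.mb": 2 * ram_per_container,
    }

for key, value in recommend(cores=2, ram_mb=6144, disks=2).items():
    print(key, "=", value)
```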

Contributor

Can you tell me the recommended settings for my cluster?

I have 3 nodes, each dual core: one with 12 GB RAM and the other two with 6 GB RAM.

Super Guru

It seems to be a test cluster with very little memory. I would suggest going with the default configuration, where the mapper size is 1 GB and the reducer is 2 GB. This is taken care of by the Ambari defaults.
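
If you want to see what those defaults translate to, these are the keys involved. The values below just restate the 1 GB / 2 GB figures above, with heap opts at the usual ~80% of container size; confirm the exact numbers in your Ambari UI:

```python
# The 1 GB mapper / 2 GB reducer defaults, expressed as the mapred-site.xml
# keys Ambari manages. The -Xmx values assume the common ~80%-of-container
# heap sizing; verify the actual values in Ambari rather than copying these.
defaults = {
    "mapreduce.map.memory.mb": "1024",
    "mapreduce.map.java.opts": "-Xmx819m",
    "mapreduce.reduce.memory.mb": "2048",
    "mapreduce.reduce.java.opts": "-Xmx1638m",
}
for key, value in defaults.items():
    print(key, "=", value)
```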

Explorer

Does anyone know why my Pig/Hive jobs are not working?

I followed the HDP automated install on one VMware VM with 16 GB, and everything is fine (all services are green in Ambari).

But when I try to use Pig/Hive in an Ambari View, the jobs all fail, stopped at 0% complete.

I also tried logging into the terminal and manually running the Pig wordcount example; the problem is the same.

Does anyone have the same problem?

Is one VM not enough for an Ambari HDP cluster? (If so, why is the sandbox fine?)

Thanks