Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.


YARN applications hang forever if run in parallel

Explorer

Hi,

 

We have an 8-node cluster on CDH 5.0.2 with YARN (MRv2) in use, and a big problem that is probably due to the configuration.

 

In addition to Hadoop we also use Impala, so we cannot give all resources to YARN.

Each of our nodes has 128 GiB of RAM and 12 cores.

 

Currently the memory configuration for YARN looks as follows:

 

mapreduce.map.memory.mb = 8 GiB
mapreduce.reduce.memory.mb = 8 GiB
yarn.app.mapreduce.am.resource.mb = 8 GiB
mapreduce.map.java.opts.max.heap = 6960 MiB
mapreduce.reduce.java.opts.max.heap = 6960 MiB
"Java Heap Size in Bytes of NodeManager" = 8 GiB
yarn.nodemanager.resource.memory-mb = 80 GiB
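If it helps, here is the container capacity these numbers imply (a rough sketch; the 8 nodes, 80 GiB per NodeManager, and 8 GiB per container are from this post, the rest is plain arithmetic):

```python
# Rough container-capacity arithmetic for the configuration above.
# Assumes a homogeneous 8-node cluster with 80 GiB usable per node for
# YARN and 8 GiB per map/reduce/AM container, as described in the post.

NODES = 8
NM_MEMORY_GIB = 80     # yarn.nodemanager.resource.memory-mb
CONTAINER_GIB = 8      # map, reduce and AM container size

containers_per_node = NM_MEMORY_GIB // CONTAINER_GIB
cluster_containers = NODES * containers_per_node

print(containers_per_node)  # 10 containers per node
print(cluster_containers)   # 80 containers cluster-wide

# Every running application first needs one 8 GiB ApplicationMaster
# container before any map or reduce task can start, so N parallel
# applications leave only (cluster_containers - N) slots for real work.
```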

 

Now we have the problem that if we run multiple applications in parallel, they all stall and none of them finishes.

It looks as if they hang forever. I see no exceptions or errors in "/var/log/hadoop-yarn" (debug log level).
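One scenario that would match these symptoms (just a guess on my side, not confirmed from the logs) is an ApplicationMaster deadlock: if the scheduler admits enough applications at once, their AM containers can occupy every available slot, so no application ever receives a task container and all of them wait forever. A toy model of that condition, with purely illustrative numbers:

```python
# Toy model of the AM-deadlock scenario: each application needs one AM
# container plus at least one task container to make progress. If the
# scheduler admits as many applications as there are container slots,
# every slot holds an AM and nothing can run.

def can_make_progress(total_slots: int, running_apps: int) -> bool:
    """True if at least one task container slot remains beside the AMs."""
    am_slots = running_apps             # one AM container per admitted app
    free_slots = total_slots - am_slots
    return free_slots >= 1

print(can_make_progress(80, 10))  # True: 70 slots remain for tasks
print(can_make_progress(80, 80))  # False: all slots hold AMs -> deadlock
```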

 

I would be glad if someone could help. 🙂

 

Best regards
