
Spark-shell in YARN mode gets stuck

New Contributor

Hi everyone,

I have recently installed CM 6.0.1 on a two-node cluster. When I run spark-shell to open Spark in YARN mode, it gets stuck right after this first message:

"Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel)."

When I run spark-shell --master local, it works well, so I suppose it is a YARN configuration problem.

By the way, according to CM, all the Spark and YARN roles are healthy.

I suspect it is a memory configuration problem in the various YARN roles, but I do not know how to handle it.

In case it is useful: the node with CM installed has 15 GB of RAM, and the other one has 10 GB.

Can anyone help?

How should I customize yarn.app.mapreduce.am.resource.mb, the ApplicationMaster maximum Java heap size, yarn.nodemanager.resource.memory-mb, and so on?
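
For reference, my understanding is that outside of CM these would be yarn-site.xml entries roughly like the sketch below (the values are placeholders for illustration, not a recommendation):

  <!-- yarn-site.xml: illustrative values only -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>6144</value> <!-- total memory YARN may hand out on this node -->
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>6144</value> <!-- largest single container the scheduler will grant -->
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value> <!-- memory requested for the MapReduce ApplicationMaster -->
  </property>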

 

Thanks a lot

 

Miki

1 REPLY

Expert Contributor

You may need to increase:

yarn.nodemanager.resource.memory-mb

yarn.scheduler.maximum-allocation-mb

The defaults may be too small to launch a default Spark executor container (1024 MB executor memory + 512 MB overhead).
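
As a quick check that memory limits really are the problem, you could also try shrinking the request instead of raising the limits; a minimal sketch (standard spark-shell options, values purely illustrative):

  spark-shell --master yarn \
    --driver-memory 512m \
    --executor-memory 512m \
    --num-executors 1

If the shell comes up with these smaller requests, the container limits above are almost certainly what is blocking the default launch.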

 

You may also want to enable INFO logging for spark-shell to see the exact error or warning it is hitting, by editing:

/etc/spark/conf/log4j.properties
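
A minimal sketch, assuming the stock log4j template shipped with Spark/CDH (check your copy, as the exact line may differ): raise the root logger from WARN to INFO:

  # /etc/spark/conf/log4j.properties
  log4j.rootCategory=INFO, console

Alternatively, running sc.setLogLevel("INFO") inside the shell has the same effect without editing any files.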