Spark-shell in Yarn mode gets stuck
- Labels: Apache Spark, Apache YARN, Cloudera Manager
Created on 10-30-2018 08:11 AM - edited 09-16-2022 06:51 AM
Hi everyone,
I have recently installed CM 6.0.1 on a two-node cluster. When I type spark-shell to open Spark in YARN mode, it gets stuck right after this first message:
"Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel)."
When I run spark-shell --master local, it works well, so I suppose it is a YARN configuration problem.
By the way, according to CM, all the roles in Spark and YARN are healthy.
I suspect it is a memory configuration problem in the various YARN roles, but I do not know how to tune them.
In case it is useful: the node with CM installed has 15 GB of RAM and the other one has 10 GB.
Can anyone help?
How should I customize yarn.app.mapreduce.am.resource.mb, the ApplicationMaster maximum Java heap size, yarn.nodemanager.resource.memory-mb, and so on?
Thanks a lot
Miki
Created 11-11-2018 10:05 PM
You may need to increase:
yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb
The defaults can be too small to launch even a default Spark executor container (1024 MB + 512 MB overhead).
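As a rough starting point on your 10 GB node, something like the following in yarn-site.xml (or the matching fields under YARN > Configuration in Cloudera Manager) should still leave headroom for the OS and other roles; the 8192 MB figures are only example values to adapt:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value> <!-- total memory YARN may allocate on this NodeManager -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value> <!-- largest single container the scheduler will grant -->
</property>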
You may also want to enable INFO logging for spark-shell to see exactly which error or warning it hits:
/etc/spark/conf/log4j.properties
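For example, changing the root logger in that file from WARN to INFO is usually enough; the line below follows the stock Spark log4j template:

# /etc/spark/conf/log4j.properties
log4j.rootCategory=INFO, console

Then rerun spark-shell --master yarn and watch the INFO output: repeated "Application report ... (state: ACCEPTED)" messages from the YARN client typically mean the application is stuck waiting for a container that the cluster cannot allocate.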
