As the title says. Let's say I'm submitting a Spark job like this:
spark-submit --class streaming.test --master yarn --deploy-mode cluster --name some_name --executor-memory 512m --executor-cores 1 --driver-memory 512m some.jar
The job is submitted and running, as you can see here:
But as you can see, I gave the job 512 MB of RAM, yet YARN allocated 3 GB, and this happens for every Spark job I submit. Can someone point out where I'm going wrong?
I have 3 RMs, and yarn.scheduler.minimum-allocation-mb is set to 1024. Is the 3 GB because of 1024 * (number of RMs)?
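For reference, this is roughly how that property looks in my yarn-site.xml (the exact file contents on your cluster may differ):

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>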
The 3 GB is the total memory across all containers. Four apps in that screenshot show 3 GB because they each have 3 running containers. If you look at the app in the 3rd row, you will see only 1 container and hence 1024 MB.
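Assuming each of the 3 containers gets rounded up to your 1024 MB minimum allocation, the math works out to:

3 containers * 1024 MB = 3072 MB, which the UI displays as 3 GB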
Actually I did that: killed the app, submitted the same app with the same config, and it took 3 GB again. I'll give it another shot and give you feedback ASAP.
I really have the feeling that YARN is overriding the parameters I'm passing. I also tried setting --num-executors to 2, but it set 3, as you can see in the first picture above.
One container is always the AM (ApplicationMaster); that's why it is 3. Can you click on the application ID in the first row, then on the attempt ID link, and then on each of the 3 container ID links to see how much memory each container is taking?
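In other words, with --num-executors 2 you would expect:

2 executor containers + 1 ApplicationMaster container = 3 containers

which is exactly what the UI shows.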
I set yarn.scheduler.minimum-allocation-mb to 256 MB.
The spark-submit configs are now the following:
--executor-memory 256m --executor-cores 1 --num-executors 1 --driver-memory 512m
I needed to set --driver-memory to 512 MB since the application wouldn't start otherwise. With these configs the application is taking 2 GB of RAM, and to answer your question: the job is, as you assumed, running across 2 containers, each taking 1024 MB.
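So the total matches what YARN reports:

2 containers * 1024 MB = 2048 MB = 2 GB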
In the INFO output of the Spark job I can see this:
17/10/30 17:57:10 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
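If my understanding of Spark-on-YARN sizing is right, that log line explains the numbers: Spark adds a memory overhead of max(10% of the requested memory, 384 MB) on top of what you pass, and YARN then rounds the request up to the next multiple of yarn.scheduler.minimum-allocation-mb. For the AM/driver that gives:

512 MB (--driver-memory) + 384 MB (overhead) = 896 MB requested
896 MB rounded up to the next multiple of 256 MB = 1024 MB container

The same overhead-plus-rounding applies to each executor container, which is why the containers never end up at exactly the value you pass on the command line.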