Created 10-30-2017 01:35 PM
Hi,
As my question says, let's say I'm submitting a Spark job like this:
spark-submit --class streaming.test --master yarn --deploy-mode cluster --name some_name --executor-memory 512m --executor-cores 1 --driver-memory 512m some.jar
The job is submitted and running, as you can see here:
(attachment: screenshot-6.jpg)
But as you can see, I gave the job 512MB of RAM, yet YARN allocated 3GB, and this happens for every Spark job I submit. Can someone point out where I'm going wrong?
UPDATE:
I have 3 RMs, and yarn.scheduler.minimum-allocation-mb is set to 1024. Is that because of 1024 * (number of RMs)?
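(1024MB * 3 = 3072MB, which would match the 3GB I'm seeing, if that's how it works.)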
Created 11-03-2017 04:45 PM
Spark Client is overriding the AM memory from 512MB to 896MB. Can you check the Spark AM logs and see if the AM is overriding the container memory from 256MB to a higher value?
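For reference, here is a rough sketch of the arithmetic, assuming the Spark 2.x defaults on YARN (spark.yarn.driver.memoryOverhead and spark.yarn.executor.memoryOverhead default to max(384MB, 10% of the requested heap), and two executor instances by default), and that YARN rounds every container request up to a multiple of yarn.scheduler.minimum-allocation-mb:

AM/driver container: 512MB + max(384MB, 0.10 * 512MB) = 896MB, rounded up to 1024MB
executor containers (2 by default): 512MB + 384MB = 896MB, rounded up to 1024MB each
total: 3 * 1024MB = 3072MB, i.e. roughly 3GB

If that holds on your cluster, the 3GB would simply be three 1GB containers, not 1024 * (number of RMs).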
Created 11-03-2017 08:49 PM
The Spark AM logs? Can you point me to them, please? :S
Created 01-31-2018 10:06 AM
Can you please help me?
Created 01-31-2018 05:22 PM
In the RM UI, you can click the application id link for a Spark job, follow the app-attempt link, and then click the logs link for the first container (typically the one ending with 0001). Check the AM logs there and see what you find.
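If the UI is awkward to navigate, you can also pull the aggregated logs from a shell with the YARN CLI, provided log aggregation is enabled on your cluster (the application id below is a placeholder; use the one shown in the RM UI for your job):

yarn logs -applicationId application_1509000000000_0001

Then search the output for the memory values the AM reports when it requests containers.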