
Spark job taking more memory than it is given

Expert Contributor

Hi,

As the title says. Let's say I'm submitting a Spark job like this:

spark-submit --class streaming.test --master yarn --deploy-mode cluster --name some_name --executor-memory 512m --executor-cores 1 --driver-memory 512m some.jar

The job is submitted and running, as you can see here:

[screenshot-6.jpg: RM UI showing the application with 3 GB of allocated memory]

But as you can see, I gave the job 512 MB of RAM and YARN allocated 3 GB, and this happens for every Spark job I submit. Can someone point out where I'm going wrong?

UPDATE:
I have 3 RMs, and yarn.scheduler.minimum-allocation-mb is set to 1024. Is the 3 GB because of that, i.e. 1024 MB × (number of RMs)?
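
A likely explanation, assuming stock defaults that this thread does not confirm (spark.executor.instances=2 when --num-executors is not passed, and a per-container memory overhead of max(384 MB, 10% of the heap)): YARN rounds every container request up to the next multiple of yarn.scheduler.minimum-allocation-mb, once per container, so the number of RMs does not enter into it. The arithmetic would then be:

# per-container request: 512 MB heap + 384 MB default overhead = 896 MB
# granted per container: 896 MB rounded up to 1024 MB (the minimum allocation)
# AM + 2 default executors: 3 x 1024 MB = 3072 MB, i.e. the 3 GB shown in the UI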

13 REPLIES

Expert Contributor

The Spark client is overriding the AM memory from 512 MB to 896 MB. Can you check the Spark AM logs and see if the AM is overriding the container memory from 256 MB to a higher value?
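
The 896 MB figure is consistent with Spark adding its default overhead of max(384 MB, 10% of the heap) on top of the requested 512 MB. One way to make that explicit is to pin the overhead yourself (a sketch, not verified against this cluster; spark.yarn.driver.memoryOverhead and spark.yarn.executor.memoryOverhead are the Spark 1.x/2.x property names, later renamed spark.driver.memoryOverhead and spark.executor.memoryOverhead):

spark-submit --class streaming.test --master yarn --deploy-mode cluster --name some_name \
  --conf spark.yarn.driver.memoryOverhead=384 \
  --conf spark.yarn.executor.memoryOverhead=384 \
  --executor-memory 512m --executor-cores 1 --driver-memory 512m some.jar

With those defaults, 512 MB of heap plus the 384 MB minimum overhead is exactly the 896 MB mentioned above.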

Expert Contributor

Spark AM logs? Can you point me to them, please? :S

Expert Contributor

@Gour Saha

Can you please help me?

Expert Contributor

In the RM UI, click the application ID link for the Spark job, follow the app-attempt link, and then click the logs link next to the first container (typically the one ending in 0001). Check the AM logs there and see what you find.
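
If log aggregation is enabled on the cluster, the same AM logs can also be pulled from the command line with the standard YARN CLI (the application ID below is a placeholder, not one from this thread):

yarn application -list
yarn logs -applicationId application_XXXXXXXXXXXXX_0001 | less

The first container of the first attempt (the one ending in 000001) is the AM, and its log should show the container memory the AM actually requested.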