I am trying to run a simple Spark job on our Hortonworks cluster. The metrics show that before I run my job, memory usage is, for example, 310 GB out of 640 GB. However, when I run my job, it becomes 324 GB out of 440 GB.
My job only takes 12 GB from the cluster, so I don't understand why the total available memory decreases when I run the job.
At first it was 324 out of 640, then it became 324 out of 440, then 324 out of 320, and finally 324 out of 328. This is the metric reported by the YARN memory panel on the Ambari dashboard.
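To cross-check what the Ambari panel reports, I also looked at the ResourceManager's REST API, which exposes the cluster-wide memory numbers directly. A sketch of that check (the hostname `rm-host` is a placeholder for your ResourceManager, and 8088 is the default web port, which may differ on your cluster):

```shell
# Query the YARN ResourceManager cluster metrics endpoint and print
# the total, allocated, and available memory fields.
# Replace rm-host with your ResourceManager's hostname.
curl -s "http://rm-host:8088/ws/v1/cluster/metrics" \
  | python -c 'import sys, json
m = json.load(sys.stdin)["clusterMetrics"]
print("total MB:    ", m["totalMB"])
print("allocated MB:", m["allocatedMB"])
print("available MB:", m["availableMB"])'
```

If `totalMB` itself shrinks while the job runs, the change is coming from YARN's view of NodeManager capacity rather than from the job's own allocation.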