Support Questions


Running Spark jobs on YARN and the amount of memory they use

New Contributor

Hi,

I am trying to run a simple Spark job on our Hortonworks cluster. Before running my job, the metrics show, for example, 310 GB used out of 640 GB of memory. However, when I run my job, it becomes 324 GB out of 440 GB.

My job only takes about 12 GB from the cluster, but I am not sure why the total available memory decreases when I run the job.

It was first 324 out of 640, then became 324 out of 440, then 324 out of 320, and finally 324 out of 328. This is the metric reported in the YARN memory panel of the Ambari dashboard.
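For reference, I am cross-checking the Ambari numbers against the ResourceManager REST API with something like the sketch below (the ResourceManager host and port are placeholders, not my actual address):

```python
import requests

# Placeholder ResourceManager address -- replace with the real RM host/port.
RM = "http://rm-host:8088"

# Cluster-wide YARN memory figures; roughly the numbers behind the Ambari YARN memory panel.
metrics = requests.get(RM + "/ws/v1/cluster/metrics").json()["clusterMetrics"]
print("total MB:    ", metrics["totalMB"])
print("allocated MB:", metrics["allocatedMB"])
print("available MB:", metrics["availableMB"])
```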

2 Replies

Re: Running Spark jobs on YARN and the amount of memory they use

Super Guru

Have you looked at the Spark UI, the Spark History Server UI, and the YARN UI?

Is anyone else using the cluster?

Can you post some logs?

Can you post a screenshot?

Often other jobs are running, including backups, Hive queries, Zeppelin notebooks, and other workloads that can take resources. It also depends on your versions, configuration, and more.

What versions of HDP, Spark, and YARN are you running?
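If you want to see what else is holding memory while your job runs, you can also list the running applications through the ResourceManager REST API, for example with a rough sketch like this (the host and port are placeholders for your cluster):

```python
import requests

# Placeholder ResourceManager address -- adjust for your cluster.
RM = "http://rm-host:8088"

# Every application currently holding containers, with the memory it has allocated.
resp = requests.get(RM + "/ws/v1/cluster/apps", params={"states": "RUNNING"}).json()
for app in (resp.get("apps") or {}).get("app", []):
    print("{:<40} {:<12} {:>8} MB".format(app["name"], app["user"], app["allocatedMB"]))
```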

Re: Running Spark jobs on YARN and the amount of memory they use

Super Guru