
How to increase Spark worker memory from CM

Expert Contributor

I am working on a cluster with 1 master and 2 workers. When I open the Spark Master UI, it shows:

Memory: 128.0 MB Total

each worker: 64.0 MB (0.0 B Used)

However, each worker node actually has 4 GB of memory in total, with 2.5 GB free, so I want to increase the worker memory from 64 MB to 1 GB. I tried to find spark-defaults.conf in /etc/spark/conf.cloudera.spark, but it doesn't exist. I am running Cloudera Manager 5.0.2, and I guess there is a setting for this under Cloudera Manager => Spark service, but I can't find it.

 

Thanks in advance 

1 ACCEPTED SOLUTION

Look in the Resource Management category on the left of the configuration page. Each role group (there's usually one for each role type) has this subcategory.

You can also search for "memory" or "heap" using the search bar on the left.

Editing /etc/spark/conf won't help you: the CM-managed daemon roles don't read from /etc, but instead each have their own independent configuration directories.
http://blog.cloudera.com/blog/2013/07/how-does-cloudera-manager-work/
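For context, the per-worker figure shown in the Master UI corresponds to Spark's standalone `SPARK_WORKER_MEMORY` setting. Under CM you change it through the Worker role group's configuration rather than by editing files, but a minimal sketch of the equivalent spark-env.sh entries (example values, not CM defaults) looks like this:

```shell
# spark-env.sh -- illustrative only; under Cloudera Manager this file is
# generated per role, so set the equivalent fields in the CM UI instead.

# Total memory a standalone Worker may hand out to executors on this node
export SPARK_WORKER_MEMORY=1g

# Heap of the Worker daemon process itself (distinct from executor memory)
export SPARK_DAEMON_MEMORY=512m
```

Note that jobs are still capped by their own executor memory (`spark.executor.memory`, or `--executor-memory` on spark-submit), so you may need to raise that too once the worker limit is lifted.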

Thanks,
Darren


2 REPLIES


Expert Contributor

Thanks so much!