
Cloudera Data Science Workbench - Memory usage

Explorer

Hello.

I have to support a cluster running Cloudera Data Science Workbench (v1.6), installed across two nodes (each with 60 GB of memory).

I need to understand how CDSW uses Docker and Kubernetes. I can see that there are many containers and Kubernetes pods running in the background.
Is it possible to automate the removal of idle containers?
How can I monitor the memory consumption of each node and each container?

CDSW v1.9 has a Grafana dashboard for monitoring. Can I get these metrics from the CLI?

1 ACCEPTED SOLUTION

Master Guru

@gfranco In CDSW almost every pod serves its own purpose, so there is little you can safely remove; the only adjustment possible is scaling the web pods up or down if needed.

Regarding monitoring: CDSW runs Kubernetes under the hood, so you can use Kubernetes commands to watch resource usage.

Here is a good discussion of such commands/utilities: https://github.com/kubernetes/kubernetes/issues/17512
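
As a quick sketch, assuming you have shell access to the CDSW master host (where kubectl is already configured for the CDSW-managed cluster) and that the metrics add-on bundled with your CDSW version is running, you can pull node- and pod-level usage like this:

    # Memory and CPU per node
    kubectl top nodes

    # Memory and CPU per pod, across all namespaces
    kubectl top pods --all-namespaces

    # Container-level stats straight from the Docker daemon on a host
    docker stats --no-stream

If kubectl top is not supported by the metrics backend in your version, kubectl describe node <node-name> still shows per-node allocated resources, requests, and limits.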

 

From the CDSW public documentation you can refer to this page: https://docs.cloudera.com/cdsw/1.9.1/monitoring/topics/cdsw-monitoring.html
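
Assuming you can run commands as root on the master host, the CDSW admin CLI that ships with the product also gives a quick health and resource overview (the exact output varies by version):

    # On the CDSW master host
    cdsw status

That covers overall node and system-pod state; for per-container numbers, the kubectl/docker commands above remain the way to go.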


Cheers!
Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.

