Support Questions

Find answers, ask questions, and share your expertise

Allocating more than 50% of memory in CDSW

Contributor

Hello,

 

Is it possible for a single user to allocate more than 50% of the memory in CDSW? I'm not able to start a session with more than 50% of memory.

This would make sense if it were a limitation of CDSW, but most of the time I'm the only one using CDSW in our company, so I wanted to ask whether there is a parameter I can set to allocate more memory.

 

Thanks.

1 ACCEPTED SOLUTION

Master Collaborator

Hello @Baris 

 

There is no such limitation in CDSW. If a node has spare resources, Kubernetes can use that node to launch the pod.

 

May I ask how many nodes there are in your CDSW cluster? What is the CPU and memory footprint of each node, which version of CDSW are you running, and what error do you get when launching a session with more than 50% of memory?

 

You can find out how many spare resources are available cluster-wide from the CDSW homepage (Dashboard). If you want to know exactly how many spare resources are available on each node, run $ kubectl describe node on the CDSW master server.

 

Example: in the output below you can see that out of 4 CPUs (4000m), 3330m have already been requested, and similarly out of roughly 8 GB of RAM, around 6.5 GB has already been requested. This means that if you try to launch a session with 1 CPU or 2 GB of RAM, it will not be scheduled, because neither request fits in what is left.

$ kubectl  describe nodes
Name:               host-aaaa
Capacity:
 cpu:     4
 memory:  8009452Ki
Allocatable:
 cpu:     4
 memory:  8009452Ki
Allocated: 
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  3330m (83%)   0 (0%)      6482Mi (82%)     22774Mi (291%)
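
 

As a rough illustration of the arithmetic (a minimal sketch using the figures from the sample output above, not live data), the remaining schedulable headroom on that node works out to about 670m of CPU and roughly 1.3 GB of memory:

$ cpu_allocatable_m=4000                  # 4 CPUs = 4000 millicores
$ cpu_requested_m=3330                    # "CPU Requests" from the output above
$ mem_allocatable_mi=$((8009452 / 1024))  # allocatable memory in Mi (~7821Mi)
$ mem_requested_mi=6482                   # "Memory Requests" from the output above
$ echo "CPU left: $((cpu_allocatable_m - cpu_requested_m))m"
CPU left: 670m
$ echo "Memory left: $((mem_allocatable_mi - mem_requested_mi))Mi"
Memory left: 1339Mi

A session profile has to fit entirely within that remaining headroom, which is why a 1 CPU or 2 GB request fails in this example.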

 

Do note that a session can only spin up its engine pod on a single node. For example, if you have three nodes with 2 GB of RAM free on each of them, you might assume that you have 6 GB of free RAM and can launch a session with 6 GB of memory, but because a session can't share resources across nodes you would instead see an error like this: "Unschedulable: No nodes are available that match all of the predicates: Insufficient memory (3)"
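
 

If you want to check the per-node headroom before picking a session profile, something along these lines should work from the CDSW master (a sketch; <node-name> is a placeholder, and the grep pattern matches both the abbreviated "Allocated:" heading in the sample above and the usual "Allocated resources:" heading):

$ kubectl get nodes
$ kubectl describe node <node-name> | grep -A 6 'Allocated'

Whatever CPU/memory profile you choose has to fit on one of those nodes on its own.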


Contributor

Thank you for the great explanation @AutoIN. This solved my problem.

Our CDSW cluster has two nodes, a master and a worker. As described, I was able to figure out that the available CPU and memory on the two hosts are unevenly distributed: for example, I can spin up an engine with many vCPUs but little memory, and vice versa.

I was just not aware that a session can't share resources across nodes.

 

Thank you very much!