Support Questions

Find answers, ask questions, and share your expertise

Docker quickstart : resource issues

Guru

Hello,

I pulled the latest Docker image of the quickstart single-node "cluster" to start playing around on my laptop, which has 16 GB RAM and 8 CPU cores.

After starting the image via

sudo docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 8888 -p 7180 cloudera/quickstart /bin/bash

I got a command line.
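For reference, a variant of that command with explicit host:container port mappings (just a sketch, I have not verified it against this image) would expose Hue (8888) and Cloudera Manager (7180) on fixed host ports instead of randomly assigned ones:

```shell
# Hypothetical variant of the run command above: publish the container ports
# to fixed host ports instead of letting Docker pick random ones.
# (8888 = Hue, 7180 = Cloudera Manager; the usual defaults, not verified here.)
run_cmd="docker run --hostname=quickstart.cloudera --privileged=true -t -i \
  -p 8888:8888 -p 7180:7180 cloudera/quickstart /bin/bash"
echo "$run_cmd"
```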

From there I started CM (after having started mysqld) via

/home/cloudera/cloudera-manager --express 

and logged into it.

Starting up HDFS didn't work because of an "Out of memory" error for the NameNode. Snippet from /var/run/cloudera-scm-agent/process/16-hdfs-NAMENODE/hs_err_pid16341.log:

#  Out of Memory Error (workgroup.cpp:96), pid=16341, tid=140217219892992

Then I stopped the Cloudera Management Services and started only the "ServiceMonitor", followed by the NameNode, which then came up fine.

After that I wanted to start the "HostMonitor" from the Management Services, which again failed with "Out of system resources", but top inside the container shows:

Cpu(s):  4.4%us,  0.3%sy,  0.0%ni, 94.9%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16313048k total,  8682216k used,  7630832k free,   115504k buffers
Swap:    16380k total,        0k used,    16380k free,  2847244k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                        
10432 cloudera  20   0 4834m 483m  30m S 23.3  3.0   7:20.70 java                                                                                           
  960 root      20   0 2247m  53m 5836 S  0.3  0.3   0:21.10 cmf-agent                                                                                      

Hence, there should be enough free resources to start the "HostMonitor"?
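As a sanity check on those numbers: free + buffers + cached is roughly what Linux can actually hand out. Recomputing that from the top output quoted above:

```shell
# Recompute effectively-available memory from the top output quoted above:
# free + buffers + cached (all in kB). Values are copied from the snippet.
mem_line="Mem:  16313048k total,  8682216k used,  7630832k free,   115504k buffers"
cached_kb=2847244  # 'cached' column from the Swap line above

free_kb=$(echo "$mem_line" | awk '{gsub(/k/,""); print $6}')
buffers_kb=$(echo "$mem_line" | awk '{gsub(/k/,""); print $8}')

available_kb=$((free_kb + buffers_kb + cached_kb))
echo "approx. available: $((available_kb / 1024)) MB"  # roughly 10 GB
```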

 

Shouldn't the services in the container run smoothly without resource issues, given 16 GB RAM and enough cores on the host, or am I missing something here?

Any help with solving this resource issue is highly appreciated 😄

 

Thanks and regards


1 ACCEPTED SOLUTION

Expert Contributor
Hm, not sure. When it comes to Docker, the single-image Cloudera QuickStart has been replaced by the clusterdock framework-based CDH deployment, so maybe it's worth giving that a shot? It's always worth being mindful of resource needs for distributed systems; ultimately, we're trying to get a laptop to do the work normally expected of multiple servers. 🙂


4 REPLIES

Expert Contributor
Are you on a Mac? How did you determine the free RAM in the container?

Guru

Hi @dspivak ,

thanks for answering and sorry for the delay.

I am running Fedora on the host, and the output above is just the overall memory consumption as reported by top inside the container.

 

Are there any tweaks that need to be made to run CM + Hadoop services within that container, or where does this "out of memory" come from? Are there any container limits to modify, and if so, where?
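A sketch of how one could check whether the container itself has a memory cap (the cgroup paths here are an assumption on my side and depend on the host's cgroup version):

```shell
# check_mem_limit DIR: print any cgroup memory-limit files found under DIR.
# Paths differ between cgroup v1 (memory/memory.limit_in_bytes) and
# cgroup v2 (memory.max); try both and report whichever exists.
check_mem_limit() {
    for f in "$1"/memory/memory.limit_in_bytes "$1"/memory.max; do
        [ -r "$f" ] && echo "$f: $(cat "$f")"
    done
    return 0
}

# Inside the running container one would call:
check_mem_limit /sys/fs/cgroup
```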

 

Thanks in advance...

Guru

Many thanks, I will switch to clusterdock... it looks very interesting, btw 😉