I pulled the latest Docker image for the quickstart single-node 'cluster' to start playing around with it on my laptop, which has 16 GB RAM and 8 CPU cores.
After starting the image via
sudo docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 8888 -p 7180 cloudera/quickstart /bin/bash
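As a side note, as far as I understand `-p 8888` alone publishes the container port to a *random* host port; to reach Hue and CM on the usual ports, the mapping can be given explicitly. A sketch of the same command with explicit mappings (untested on my side):

```shell
# Same run command, but with explicit host:container port mappings
# (8888 = Hue, 7180 = Cloudera Manager). '-p 8888' alone would publish
# the port under a random host port instead.
sudo docker run --hostname=quickstart.cloudera --privileged=true -t -i \
    -p 8888:8888 -p 7180:7180 \
    cloudera/quickstart /bin/bash
```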
I got a command line.
From there I started CM (after having started mysqld) and logged into it.
Starting up HDFS didn't work: the NameNode failed with an "Out of memory" error. Snippet from /var/run/cloudera-scm-agent/process/16-hdfs-NAMENODE/hs_err_pid16341.log:
# Out of Memory Error (workgroup.cpp:96), pid=16341, tid=140217219892992
Then I stopped the Cloudera Management Services and started only the "ServiceMonitor", followed by the NameNode, which then came up fine.
After that I wanted to start the "HostMonitor" from the Management Services, which again failed with "Out of system resources", even though top inside the container shows:
Cpu(s):  4.4%us,  0.3%sy,  0.0%ni, 94.9%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16313048k total,  8682216k used,  7630832k free,   115504k buffers
Swap:    16380k total,        0k used,    16380k free,  2847244k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
10432 cloudera  20   0 4834m 483m  30m S 23.3  3.0   7:20.70 java
  960 root      20   0 2247m  53m 5836 S  0.3  0.3   0:21.10 cmf-agent
Hence, there should be enough free resources to start the "HostMonitor", shouldn't there?
Shouldn't the services in the container run smoothly without any resource issues, given 16 GB RAM and plenty of cores on the host, or am I missing something here?
Any help with solving this resource issue is highly appreciated :D
Thanks and regards
Hi @dspivak ,
Thanks for answering, and sorry for the delay.
I am running Fedora on the host, and the top output above shows the overall memory consumption of the "situation".
Are there any tweaks that need to be done to run CM and the Hadoop services within that container, or where does this "out of memory" come from? Are there any container limits to modify, and if so, where?
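For what it's worth, this is how one could check from the host whether Docker itself imposes a memory or swap limit on the container. This is just a sketch: `quickstart` stands in for the actual container name or ID, and a value of 0 means no limit was set.

```shell
# Show the memory/swap limits Docker applied to the container;
# 0 means "no limit configured".
docker inspect -f 'Memory limit: {{.HostConfig.Memory}}' quickstart
docker inspect -f 'Swap limit:   {{.HostConfig.MemorySwap}}' quickstart

# Inside the container, the effective cgroup (v1) limit can be read
# directly; a huge number here likewise means "unlimited".
docker exec quickstart cat /sys/fs/cgroup/memory/memory.limit_in_bytes
```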
Thanks in advance...