
How can we set up customized cgroups for CentOS 7?

Expert Contributor

I came to know that HDP does not support cgroups on CentOS 7, and we are running our cluster on CentOS 7. How can we enable CPU scheduling and isolation? What is the process for defining our own cgroups on CentOS 7?

1 ACCEPTED SOLUTION

Master Mentor
@Ram D

Ram, you are right about the support status. I suggest waiting for official support before going into production.

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_yarn_resource_mgt/content/enabling_cgroup...

Hint:

On RHEL 7, the cpu and cpuacct controllers are managed together by default, and the default directory is /sys/fs/cgroup/cpu,cpuacct. Because of the parsing logic in the container-executor, the comma in this path can cause failures when initializing the NodeManager (when using the LinuxContainerExecutor).

To avoid this issue, create your own directory (such as /sys/fs/cgroup/hadoop/cpu) and set the yarn.nodemanager.linux-container-executor.cgroups.mount property to true. This will allow the NodeManager to mount the cpu controller, and YARN will be able to enforce CPU limits.
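As a sketch, the relevant yarn-site.xml settings for letting the NodeManager mount the cpu controller itself might look like the following. The property names are the standard Hadoop 2.x LinuxContainerExecutor/cgroups settings; the mount path and hierarchy values are illustrative assumptions, not values taken from the docs:

```xml
<!-- Sketch only: mount path and hierarchy values below are assumptions. -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<property>
  <!-- true: the NodeManager mounts the cpu controller itself -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
  <value>true</value>
</property>
<property>
  <!-- where the controller is mounted; note: no commas in the path -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
  <value>/sys/fs/cgroup/hadoop</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
  <value>/hadoop-yarn</value>
</property>
```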

If you would like to mount CGroups yourself, you should set the yarn.nodemanager.linux-container-executor.cgroups.mount property to false and ensure that the hierarchy specified in the yarn.nodemanager.linux-container-executor.cgroups.hierarchy property exists in the mount location. You must also ensure that there are no commas anywhere in the path names.
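If you mount cgroups yourself, the node setup could be sketched as follows (run as root; the /sys/fs/cgroup/hadoop/cpu path, the hadoop-yarn hierarchy name, and the yarn:hadoop ownership are assumptions to adapt to your environment):

```shell
# Sketch only: pre-mount a comma-free cgroup v1 cpu hierarchy for YARN.
# With this layout, set yarn.nodemanager.linux-container-executor.cgroups.mount
# to false so the NodeManager uses the existing mount instead of creating one.

# Create a dedicated, comma-free mount point for the cpu controller.
mkdir -p /sys/fs/cgroup/hadoop/cpu

# Mount the cpu controller there.
mount -t cgroup -o cpu cpu /sys/fs/cgroup/hadoop/cpu

# Create the hierarchy named in
# yarn.nodemanager.linux-container-executor.cgroups.hierarchy
# and hand it to the NodeManager user (yarn:hadoop is an assumption).
mkdir -p /sys/fs/cgroup/hadoop/cpu/hadoop-yarn
chown -R yarn:hadoop /sys/fs/cgroup/hadoop/cpu/hadoop-yarn
```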


4 REPLIES


Expert Contributor

Does the cluster need Kerberos security in this case?

Master Mentor

Yes, as far as I know: "CGroups require that the HDP cluster be Kerberos enabled."

Also, Kerberos is a must in any case if you want to secure the cluster.

Master Mentor

@Ram D

See the details above. I would stick with the official docs.