How can we set up customized cgroups for CentOS 7?
- Labels: Apache YARN
Created ‎01-29-2016 06:47 PM
I came to know that HDP does not support cgroups on CentOS 7. We are running our cluster on CentOS 7. How can we enable CPU scheduling and isolation? What is the process for defining our own cgroups on CentOS 7?
Created ‎01-29-2016 06:51 PM
Ram, you are right about the support. I suggest waiting for official support before running this in production.
Hint:
On RHEL 7, the `cpu` and `cpuacct` controllers are managed together by default, under the directory `/sys/fs/cgroup/cpu,cpuacct`. Because of the parsing logic of the container-executor, the comma in this path can cause failures when initializing the NodeManager (when using the `LinuxContainerExecutor`).
To avoid this issue, create your own directory (such as `/sys/fs/cgroup/hadoop/cpu`) and set the `yarn.nodemanager.linux-container-executor.cgroups.mount` property to `true`. This allows the NodeManager to mount the `cpu` controller itself, and YARN will be able to enforce CPU limits.
If you would like to mount cgroups yourself, set the `yarn.nodemanager.linux-container-executor.cgroups.mount` property to `false` and ensure that the hierarchy specified in the `yarn.nodemanager.linux-container-executor.cgroups.hierarchy` property exists in the mount location. You must also ensure that there are no commas anywhere in the path names.
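The NodeManager-mounts-it-itself approach above can be sketched as a `yarn-site.xml` fragment. This is only a sketch, not an official HDP recommendation: the mount path `/sys/fs/cgroup/hadoop` and the hierarchy name `/hadoop-yarn` are example values, and I am assuming the standard `yarn.nodemanager.linux-container-executor.cgroups.mount-path` property to point the NodeManager at a comma-free directory.

```xml
<!-- Sketch only: example values, not an official recommendation -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <!-- Let the NodeManager mount the cpu controller itself -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
  <value>true</value>
</property>
<property>
  <!-- Comma-free location to mount the controller under (example path) -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
  <value>/sys/fs/cgroup/hadoop</value>
</property>
<property>
  <!-- Hierarchy YARN places containers under, relative to the controller mount -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
  <value>/hadoop-yarn</value>
</property>
```

For the mount-it-yourself alternative, the same fragment applies with `cgroups.mount` set to `false`, provided the `/hadoop-yarn` hierarchy already exists in your comma-free mount location.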
Created ‎01-29-2016 07:05 PM
Do we need a Kerberos-secured cluster in this case?
Created ‎01-29-2016 07:10 PM
Yes, AFAIK "CGroups require that the HDP cluster be Kerberos enabled."
Also, Kerberos is a must in any case if we want to secure the cluster.
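Since cgroups here go through the `LinuxContainerExecutor`, which is also what a Kerberized cluster uses, a minimal `yarn-site.xml` sketch of the executor settings might look like the following. This is an assumption-laden example: the group name `hadoop` is a placeholder and must match your `container-executor.cfg`.

```xml
<!-- Sketch only: LinuxContainerExecutor settings; "hadoop" is an example group -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <!-- Unix group of the NodeManager; must match the group configured
       in container-executor.cfg on every node -->
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>
```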
Created ‎02-01-2016 02:39 PM
See the following details; I would stick with the official docs.
