
What is the use of minimum and maximum container size?


Hi Team,

What is the use of minimum and maximum container size? Suppose I set my minimum container size to 2 GB and my maximum container size to 8 GB. In our cluster we run different types of jobs: one job may consume 5 GB of memory and another 8 GB, and these two jobs run successfully because my max container size is 8 GB. But if I run jobs that need 10 GB, 15 GB, or 20 GB, will they run successfully in my cluster with the above min and max container sizes?

Can a single job use a single container, or multiple containers?

Can anyone help me with this?


Re: what is the use of minimum and maximum container size

Expert Contributor

These are YARN parameters that control the minimum and maximum container sizes YARN can allocate:

YARN PARAMETERS:

----> yarn.scheduler.minimum-allocation-mb - The minimum allocation for every container request at the RM, in MBs. Memory requests lower than this won't take effect, and the specified value will get allocated at minimum.

----> yarn.scheduler.maximum-allocation-mb - The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this won't take effect, and will get capped to this value.

MAPREDUCE PARAMETERS:

Client-side parameters that specify how much memory a job requests per container. We can override these per job.

mapreduce.map.memory.mb - Map container size

mapreduce.reduce.memory.mb - Reduce container size

Note: If we request more memory than the YARN maximum allocation limit, the job will fail, as YARN will report that it cannot allocate that much memory.
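For reference, the server-side limits live in yarn-site.xml and the client-side requests in mapred-site.xml (or the per-job configuration). A minimal sketch with illustrative values:

```xml
<!-- yarn-site.xml (server side): limits enforced by the ResourceManager -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>

<!-- mapred-site.xml (client side): per-job container requests -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
```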

A few examples are given below:

----------------------------------------------------------------------------------------------------------------------------------------------------------------

Example (the following will fail):

+=============================+

Server side:

yarn.scheduler.minimum-allocation-mb=1024

yarn.scheduler.maximum-allocation-mb=8192

Client side:

mapreduce.map.memory.mb=10240

----------------------------------------------------------------------------------------------------------------------------------------------------------------

Another example (the following will work):

+=============================+

Server side:

yarn.scheduler.minimum-allocation-mb=1024

yarn.scheduler.maximum-allocation-mb=8192

Client side:

mapreduce.map.memory.mb=800

In this case the mapper will get 1024 MB (the minimum container size).

----------------------------------------------------------------------------------------------------------------------------------------------------------------

Another example (the following will work):

+=============================+

Server side:

yarn.scheduler.minimum-allocation-mb=1024

yarn.scheduler.maximum-allocation-mb=8192

Client side:

mapreduce.map.memory.mb=1800

In this case the mapper will get 2048 MB, because requests are rounded up to the next multiple of the minimum allocation.
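The behaviour in these examples can be sketched as a small model. This is a simplified illustration, assuming the scheduler rounds requests up to a multiple of the minimum allocation (the default increment for the Capacity Scheduler); the function name is made up for illustration:

```python
def allocate_container_mb(requested_mb, minimum_mb=1024, maximum_mb=8192):
    """Model how YARN sizes a container for a memory request.

    Requests above the maximum are rejected; everything else is
    rounded up to the next multiple of the minimum allocation.
    """
    if requested_mb > maximum_mb:
        raise ValueError(
            f"Requested {requested_mb} MB exceeds "
            f"yarn.scheduler.maximum-allocation-mb ({maximum_mb} MB)"
        )
    # Ceiling division: round up to the next multiple of minimum_mb.
    multiples = -(-requested_mb // minimum_mb)
    return max(minimum_mb, multiples * minimum_mb)

print(allocate_container_mb(800))    # -> 1024 (bumped up to the minimum)
print(allocate_container_mb(1800))   # -> 2048 (rounded up to a multiple of 1024)
```

With these defaults, a request of 10240 MB (the failing example above) would raise an error, matching YARN's refusal to allocate above the maximum.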

Note: A single job can use one or many containers, depending on the size of the input data, the split size, and the nature of the job.
