MapReduce 2 Optimization in Heterogeneous Cluster

New Contributor

I have this configuration:

    Hadoop: v2 (Yarn)
    An input file: Size = 100 GB.
    3 Slaves: each has 4 VCORES with Speed = 2 GHz and RAM = 8 GB
    5 Slaves: each has 2 VCORES with Speed = 1 GHz and RAM = 2 GB
    MapReduce program: WordCount

How can I minimize WordCount execution time by assigning small input splits to the 5 slower slaves and big input splits to the 3 fastest slaves?

1 ACCEPTED SOLUTION

Super Collaborator

A vcore is a virtual core. You can define it however you want.

You could, for example, define a vcore as the processing power delivered by a 1 GHz thread of a core. A 3 GHz core would then be comparable to 3 vcores in the NodeManager. Your container requests then need to ask for multiple vcores, which accounts for the difference in speed.

 

Not a lot of clusters do this due to the administrative overhead and the fact that if the end users do not use the vcore correctly it can overload the faster machines.
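Under that convention (1 vcore = the power of a 1 GHz thread), the cluster from the question could be sketched roughly as below. The property names are the standard YARN/MapReduce settings, but the values are only an illustration of the idea, not a tuned configuration:

```xml
<!-- yarn-site.xml on each of the 3 fast slaves: 4 cores x 2 GHz = 8 vcores -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>

<!-- yarn-site.xml on each of the 5 slow slaves: 2 cores x 1 GHz = 2 vcores -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>2</value>
</property>

<!-- mapred-site.xml for the job: each map container asks for 2 vcores
     (roughly one 2 GHz core's worth of work), so a fast node can run
     4 concurrent maps while a slow node runs 1 -->
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>2</value>
</property>
```

Note that with the CapacityScheduler, vcore requests are only enforced when the DominantResourceCalculator is configured; the default resource calculator schedules on memory alone.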

 

Wilfred


3 REPLIES

Super Collaborator

You need to set up the nodes with the proper vcores and memory available to the NodeManager. That should solve the problem: it will put more load on the larger nodes than on the small ones.
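For example, the NodeManager capacities could simply mirror each node's hardware. The property names are the standard ones; the memory values are illustrative and assume a little headroom is left for the OS and Hadoop daemons:

```xml
<!-- yarn-site.xml on the 3 larger slaves (4 vcores, 8 GB RAM) -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>4</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>6144</value> <!-- leave ~2 GB for the OS and daemons -->
</property>

<!-- yarn-site.xml on the 5 smaller slaves (2 vcores, 2 GB RAM) -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>2</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1536</value> <!-- leave ~0.5 GB for the OS and daemons -->
</property>
```

With capacities set this way, the ResourceManager naturally packs more containers onto the larger nodes without any per-split placement logic.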

Containers are also scheduled onto nodes based on data locality, which is out of your control.

 

However, you cannot tell YARN to start processing a specific split on a specific node.

 

Wilfred

New Contributor

I wonder why YARN doesn't support a vcore speed setting in the container configuration!
