Support Questions

Find answers, ask questions, and share your expertise

What is the best practice for scaling out the servers, and what CPU, memory, and disk space is recommended per server?

Expert Contributor

Hi,

Based on the posts in the community, it looks like physical servers are recommended rather than a VM cluster.

Now I would like to know the best practice for scaling out the servers and what memory, CPU, and disk are recommended.

It also looks like scaling out (more machines) is recommended over scaling up (a smaller number of larger machines).

Can you please advise on the details of best practices for scaling out HDP and HDF servers?

Thanks,

SJ

1 ACCEPTED SOLUTION

Master Mentor

@Sanaz Janbakhsh

It depends on various factors, such as the components you are planning to use, the kind of jobs and the amount of data you are planning to process, the size of the cluster, etc.

1. In general, you can refer to the following doc to get a good idea (Cluster Planning): http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_cluster-planning/content/ch_hardware-reco...

2. Similarly, for Slave/Master hardware recommendations: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_cluster-planning/content/hardware-for-sla...

3. You can also get an idea of the memory requirement for the NameNode based on the number of files: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/ref-809...

4. You can also use the HDP utility script, which is the recommended method for calculating HDP memory configuration settings: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/determi...
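To make point 3 concrete: a common rule of thumb (not taken from the linked doc, whose exact figures may differ) is that the NameNode holds every file, directory, and block as an in-heap object of very roughly 150 bytes, so heap needs grow with namespace size. A minimal back-of-envelope sketch under that assumption, with an assumed safety multiplier for GC headroom:

```python
# Back-of-envelope NameNode heap estimate.
# Assumption: ~150 bytes of heap per namespace object (file, directory,
# or block) -- a widely quoted rule of thumb, not a figure from the docs.

BYTES_PER_OBJECT = 150  # assumed rule-of-thumb value


def namenode_heap_gb(num_files, num_dirs,
                     avg_blocks_per_file=1.5, headroom=2.0):
    """Estimate NameNode heap in GB for a given namespace size.

    avg_blocks_per_file and headroom are illustrative assumptions.
    """
    objects = num_files + num_dirs + num_files * avg_blocks_per_file
    return objects * BYTES_PER_OBJECT * headroom / (1024 ** 3)


# e.g. 100 million files and 10 million directories:
print(round(namenode_heap_gb(100_000_000, 10_000_000), 1))  # -> 72.6
```

This only illustrates why NameNode memory is sized by file count; for real sizing, use the table in the linked doc or the HDP utility script from point 4.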


3 REPLIES


Expert Contributor

Thanks, Jay SenSharma.

How about HDF?

SJ.

Master Mentor