Support Questions

What is the best practice for scaling out servers, and what CPU, memory, and disk space are recommended per server?

Solved


Rising Star

Hi,

Based on posts in the community, it looks like physical servers are recommended rather than a VM cluster.

Now I would like to know the best practice for scaling out the servers, and what memory, CPU, and disk are recommended.

It also looks like scaling out (more machines) is recommended rather than scaling up (a smaller number of larger machines).

Can you please share more details on best practices for scaling out HDP and HDF servers?

Thanks,

SJ

1 ACCEPTED SOLUTION


Re: What is the best practice for scaling out servers, and what CPU, memory, and disk space are recommended per server?

Super Mentor

@Sanaz Janbakhsh

It depends on various factors: which components you plan to use, the kinds of jobs and the amount of data you plan to process, the size of the cluster, etc.

1. In general, you can refer to the following doc (Cluster Planning) to get a good idea of the hardware recommendations: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_cluster-planning/content/ch_hardware-reco...

2. Similarly, for slave/master hardware recommendations: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_cluster-planning/content/hardware-for-sla...

3. You can also get an idea of the NameNode memory requirement based on the number of files: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/ref-809...

4. You can also use the HDP utility script, which is the recommended method for calculating HDP memory configuration settings: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/determi...
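The utility script mentioned in point 4 essentially derives YARN/MapReduce memory settings from three node properties: core count, total RAM, and number of data disks. A minimal Python sketch of that container math, assuming the reserved-memory and minimum-container-size tables from the cluster-planning guide (and ignoring the real script's HBase option):

```python
import math

# Reserved-memory table from the HDP cluster-planning guide:
# (total RAM in GB -> GB reserved for the OS and non-YARN daemons).
RESERVED_GB = [(4, 1), (8, 2), (16, 2), (24, 4), (48, 6), (64, 8),
               (72, 8), (96, 12), (128, 24), (256, 32), (512, 64)]

def reserved_memory_gb(total_gb):
    """GB to set aside for the OS before giving the rest to YARN."""
    for threshold, reserved in RESERVED_GB:
        if total_gb <= threshold:
            return reserved
    return 64

def min_container_mb(total_gb):
    """Smallest recommended YARN container size for this much RAM."""
    if total_gb <= 4:
        return 256
    if total_gb <= 8:
        return 512
    if total_gb <= 24:
        return 1024
    return 2048

def yarn_memory_settings(cores, total_gb, disks):
    """Sketch of the container math the HDP utility script performs."""
    available_mb = (total_gb - reserved_memory_gb(total_gb)) * 1024
    min_mb = min_container_mb(total_gb)
    # Container count is bounded by CPU, disk spindles, and memory.
    containers = int(min(2 * cores,
                         math.ceil(1.8 * disks),
                         available_mb // min_mb))
    ram_per_container = max(min_mb, available_mb // containers)
    return {
        "yarn.nodemanager.resource.memory-mb": containers * ram_per_container,
        "yarn.scheduler.minimum-allocation-mb": ram_per_container,
        "yarn.scheduler.maximum-allocation-mb": containers * ram_per_container,
        "mapreduce.map.memory.mb": ram_per_container,
        "mapreduce.reduce.memory.mb": 2 * ram_per_container,
    }

# A 16-core / 64 GB / 8-disk worker works out to 15 containers of 3822 MB.
print(yarn_memory_settings(cores=16, total_gb=64, disks=8))
```

This is only a sketch of the documented logic; for a real cluster, run the actual script from the HDP companion files against your node specs, since it also handles options such as HBase memory reservation.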


3 REPLIES


Re: What is the best practice for scaling out servers, and what CPU, memory, and disk space are recommended per server?

Rising Star

Thanks, Jay SenSharma.

How about HDF?

SJ.


Re: What is the best practice for scaling out servers, and what CPU, memory, and disk space are recommended per server?

Super Mentor