How much disk space is needed to deploy an HDP cluster?
Created 09-15-2016 07:08 AM
Hi everyone!
I am a student at the University of the Basque Country, and I am trying to deploy a small HDP cluster. I have been going through the Hortonworks installation guide, but I couldn't find the answer I was looking for. My question is:
How much memory and disk space do I need to deploy an initial 2-node cluster with Ambari and HDP? I have 25GB of disk and 4GB of memory per node. Is that enough?
Thank you so much!
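Before installing, it can help to confirm what each node actually has. Here is a minimal shell check (a sketch; the 25GB/4GB thresholds are the figures from this question, not official Hortonworks minimums, and `df --output` assumes GNU coreutils):

```shell
#!/bin/sh
# Quick per-node resource check before attempting an HDP install.
MIN_MEM_MB=4096    # memory floor per node (from the question)
MIN_DISK_GB=25     # disk floor per node (from the question)

# Total RAM in MB, from /proc/meminfo (Linux).
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)

# Free space on the root filesystem, in whole GB.
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

echo "memory: ${mem_mb} MB total (want >= ${MIN_MEM_MB})"
echo "disk:   ${disk_gb} GB free on / (want >= ${MIN_DISK_GB})"
```

Run it on every node; if a node falls short, that is where to add disk or memory first.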
Created 09-15-2016 07:39 AM
Hi,
Please follow this document for the HDP requirements,
Created 09-15-2016 07:49 AM
I saw that page, but it only lists the minimum requirements for installing Ambari. In this case, if I follow the minimum specifications for installing Ambari, will I also meet the requirements for installing HDP correctly? Thank you for your quick answer!
Created 09-15-2016 07:57 AM
Add more disk space to the slave node and add memory to the master node. Which services do you want to install on HDP?
Created 09-15-2016 08:04 AM
The components I want to install are the following: Ambari, Zeppelin, Sqoop, Ranger, YARN, HDFS, and Spark (Spark SQL, Python, R). Thank you for your answer!
Created 09-15-2016 08:08 AM
Install all the master components on the master machine, which should have more than 10GB of memory and 50GB of disk space. On the slave node, add more disk space depending on your usage requirements; 4GB of memory will be sufficient there.
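With only 4GB on the slave node, YARN's NodeManager memory also needs to be kept modest. A rough back-of-the-envelope calculation (a sketch only; Hortonworks ships a proper sizing script for this, and the 1GB OS reservation here is an assumption, not an official figure) looks like:

```shell
#!/bin/sh
# Rough YARN NodeManager memory sizing for a small 4GB slave node.
total_mb=4096        # slave node RAM, as discussed in this thread
reserved_mb=1024     # assumed reservation for the OS and HDP daemons
yarn_mb=$((total_mb - reserved_mb))

# What you would put in yarn-site.xml (via Ambari) for this node.
echo "yarn.nodemanager.resource.memory-mb=${yarn_mb}"
```

This leaves about 3GB for YARN containers, which is tight but workable for a student test cluster running small Spark jobs.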
Created 09-15-2016 08:08 AM
Install the slave components on the slave node as well.
Created 09-15-2016 08:23 AM
Thank you so much for your help! I will try it that way!
Created 09-15-2016 09:48 AM
If this solved your question, please accept the answer; that will close this issue.
