How does Ambari handle changes in node configuration?
Labels: Apache Ambari, Apache Hadoop
Created ‎06-25-2018 03:48 PM
We are running a Hadoop cluster in VMs and are planning to add more cores and memory to these VM hosts. In this case, how does Ambari tune memory and other parameters in YARN, MapReduce, Hive, Spark, etc.? Will it do this automatically, or is there a script that needs to be run?
Created ‎06-26-2018 01:47 AM
The Ambari agent collects host-specific information such as disk space, memory (RAM), and CPU, and sends it to the Ambari server as part of its registration request.
If the cluster is already created and the components/services are already installed, Ambari can show recommendations whenever you make configuration changes in the Ambari UI.
You can refer to the Ambari Stack Advisor script: https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/stack_advisor.py
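To give a feel for the registration step, here is a rough sketch of the kind of host facts the agent reports. This is illustrative only, using standard-library calls; the real agent gathers far more detail (mounts, OS version, installed packages, etc.), and the function name and dictionary keys below are made up for the example:

```python
import os
import shutil

def collect_host_info(path="/"):
    """Illustrative sketch of host facts an agent might report at
    registration. Not the actual Ambari agent implementation."""
    # Disk capacity for the given mount point.
    total, used, free = shutil.disk_usage(path)
    return {
        "cpu_count": os.cpu_count(),        # logical CPU cores
        "disk_total_bytes": total,
        "disk_free_bytes": free,
    }

print(collect_host_info())
```

When the VM is resized, it is this kind of refreshed host data that lets the server-side Stack Advisor produce updated recommendations.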
If you want to explore this further, there are a few options for determining configuration requirements:
- SmartSense - The memory-related rules are updated frequently (at least once a quarter) and have the most context, since SmartSense has all of its diagnostic information at its disposal and can take into account actual versus configured use of services (cores, spindles, memory, other services running on the machine, and other third-party utilities running on it).
- Stack Advisor - Updated frequently, but tied to Ambari releases, so it depends on whether you are using Ambari and, if so, which version (1.7 vs. 2.0 vs. 2.1, etc.) and how up to date it is.
- HDP Configuration Utility - The most basic and least frequently updated option, but if you do not have Ambari or SmartSense and are deploying HDP manually, it is better than nothing.
https://community.hortonworks.com/questions/141855/stack-advisor-how-to-use-it.html
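The kind of heuristic Stack Advisor applies can be sketched as follows. Note this is a simplified illustration, not the actual Ambari logic: the reservation percentage, cap, and rounding rule are assumptions, and the function name is made up for the example. The real stack_advisor.py applies per-service, per-stack heuristics:

```python
def recommend_yarn_memory(host_ram_mb, cores):
    """Illustrative only: derive basic YARN memory settings from host
    specs. The reservation rule is a simplification, not Ambari's."""
    # Reserve a slice of RAM for the OS and other daemons
    # (assumption: 20%, capped at 8 GB).
    reserved_mb = min(int(host_ram_mb * 0.2), 8192)
    nm_memory_mb = host_ram_mb - reserved_mb

    # Roughly one container per core, rounded down to a 256 MB multiple.
    container_mb = max(256, (nm_memory_mb // cores) // 256 * 256)

    return {
        "yarn.nodemanager.resource.memory-mb": nm_memory_mb,
        "yarn.scheduler.maximum-allocation-mb": nm_memory_mb,
        "yarn.scheduler.minimum-allocation-mb": container_mb,
    }

# Example: a VM resized to 64 GB RAM and 16 cores.
print(recommend_yarn_memory(65536, 16))
```

After resizing the VMs, recommendations along these lines appear in the Ambari UI when you revisit the relevant service configuration pages; they are suggested, not applied automatically.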
Created ‎06-26-2018 11:31 AM
Exactly what I was looking for. Thank you!
