Support Questions


How does Ambari handle changes in node configuration?


We are running a Hadoop cluster in VMs and are planning to add more cores and memory to these VM boxes. In this case, how does Ambari tune the memory and other parameters in YARN, MapReduce, Hive, Spark, etc.? Will it do so automatically, or is there a script that needs to be run?

1 ACCEPTED SOLUTION

Master Mentor

@Gerg Git

The Ambari agent discovers host-specific information such as disk space, memory (RAM), and CPU, and sends it to the Ambari server as part of its registration request.

If the cluster is already created and the components/services are already installed, Ambari can show recommendations whenever you make configuration changes through the Ambari UI.

You can refer to the Ambari Stack Advisor script: https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/stack_advisor.py
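To give a feel for how the linked script is structured, here is a minimal, self-contained sketch in its style. The base class and `putProperty` helper below are simplified stand-ins (the real `stack_advisor.py` versions take more arguments and do far more validation), and the 2048 MB OS reservation is an illustrative constant, not Ambari's actual rule:

```python
# Hedged sketch of the stack-advisor override pattern; class and helper
# are simplified stand-ins so the example runs on its own.

class DefaultStackAdvisor(object):
    """Stand-in for the base class defined in stack_advisor.py."""

    def putProperty(self, configurations, config_type):
        # Return a setter that writes into the recommendation structure.
        props = configurations.setdefault(config_type, {}).setdefault("properties", {})
        def setter(key, value):
            props[key] = str(value)
        return setter

class SampleStackAdvisor(DefaultStackAdvisor):
    def recommendYARNConfigurations(self, configurations, clusterData):
        # clusterData holds host facts (RAM, cores, disks) that the
        # agents reported to the server at registration time.
        put_yarn = self.putProperty(configurations, "yarn-site")
        # Illustrative heuristic: leave 2 GB for the OS and daemons.
        put_yarn("yarn.nodemanager.resource.memory-mb",
                 clusterData["ramPerNodeMb"] - 2048)

configurations = {}
SampleStackAdvisor().recommendYARNConfigurations(configurations, {"ramPerNodeMb": 16384})
print(configurations)
# → {'yarn-site': {'properties': {'yarn.nodemanager.resource.memory-mb': '14336'}}}
```

When you resize the VMs, the agents report the new RAM/CPU figures on re-registration, and logic of this shape is what produces the updated recommendations you see in the UI.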


If you want to go further, there are a few options available for determining the config requirements:

  1. SmartSense - Its memory-related rules are updated very frequently (at least once a quarter), and it has the most context: the full SmartSense diagnostic bundle lets it compare actual versus configured use of services (cores, spindles, memory, other services and third-party utilities running on the same machine).
  2. Stack Advisor - Updated frequently, but tied to Ambari releases, so its usefulness depends on whether the customer runs Ambari and, if so, on which version (1.7 vs. 2.0 vs. 2.1, etc.) and how up to date it is.
  3. HDP Configuration Utility - The most basic and least frequently updated option, but if the customer has neither Ambari nor SmartSense and is deploying HDP manually, it is better than nothing.
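As a rough illustration of the kind of arithmetic these sizing tools perform, the sketch below follows the container-sizing heuristic published in the HDP manual-install documentation (number of containers = min(2 × cores, 1.8 × disks, total RAM / minimum container size)). The function name and the 1024 MB minimum container size are illustrative choices, not the utility's actual defaults for every profile:

```python
# Hedged sketch of an HDP-style container-sizing heuristic.
# Inputs mirror the host facts the Ambari agent reports: cores, disks, RAM.

def recommend_containers(cores, disks, total_ram_mb, min_container_mb=1024):
    """Return illustrative YARN sizing figures for one worker node."""
    # Containers are bounded by CPU, by spindles, and by memory.
    containers = int(min(2 * cores, 1.8 * disks, total_ram_mb / min_container_mb))
    containers = max(containers, 1)
    ram_per_container = max(min_container_mb, total_ram_mb // containers)
    return {
        "containers": containers,
        "ram_per_container_mb": ram_per_container,
        "yarn.nodemanager.resource.memory-mb": containers * ram_per_container,
    }

print(recommend_containers(cores=16, disks=8, total_ram_mb=65536))
# → {'containers': 14, 'ram_per_container_mb': 4681,
#    'yarn.nodemanager.resource.memory-mb': 65534}
```

Re-running a calculation like this with the new core/memory counts is essentially what you would do by hand on a cluster without Ambari or SmartSense.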


- https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_command-line-installation/content/determ...
- https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineSta...
- https://community.hortonworks.com/questions/141855/stack-advisor-how-to-use-it.html





Exactly what I was looking for. Thank you!