Created 06-25-2018 03:48 PM
We are running a Hadoop cluster in VMs and are planning to add more cores and memory to these VM boxes. In this case, how does Ambari tune the memory and other parameters in YARN, MapReduce, Hive, Spark, etc.? Will it do so automatically, or is there a script that needs to be run?
Created 06-26-2018 01:47 AM
The Ambari agent collects all the host-specific information (disk space, memory/RAM, CPU) and sends it to the Ambari server as part of its registration request.
If the cluster is already created and the components/services are already installed, then Ambari will simply show its recommendations whenever you make configuration changes via the Ambari UI.
You can refer to the Ambari Stack Advisor script: https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/stack_advisor.py
If you want to know more about it, there are a few options available to determine the config requirements, for example:
https://community.hortonworks.com/questions/141855/stack-advisor-how-to-use-it.html
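To give a feel for the kind of sizing math behind those recommendations, here is a minimal sketch in Python. It follows the formulas from the publicly documented HDP memory-tuning companion script (yarn-utils.py style): reserve RAM for the OS, bound the container count by cores, disks, and usable memory, then derive the YARN allocation properties. The function name and the coarse reservation table are illustrative assumptions, not the exact Stack Advisor logic, which lives in stack_advisor.py and the per-stack service advisors.

```python
# Hedged sketch of Stack Advisor-style YARN sizing after a hardware change.
# The reservation table and formulas follow the public HDP tuning guidance;
# exact values in a real cluster come from stack_advisor.py.

def recommend_yarn_settings(total_ram_gb, cores, disks):
    # Reserve some RAM for the OS and other daemons (coarse lookup table).
    if total_ram_gb <= 4:
        reserved_gb = 1
    elif total_ram_gb <= 8:
        reserved_gb = 2
    elif total_ram_gb <= 24:
        reserved_gb = 4
    elif total_ram_gb <= 72:
        reserved_gb = 8
    else:
        reserved_gb = 16

    usable_mb = (total_ram_gb - reserved_gb) * 1024
    min_container_mb = 1024 if total_ram_gb < 24 else 2048

    # Container count is bounded by CPU, spindles, and usable memory.
    containers = max(1, int(min(2 * cores, 1.8 * disks,
                                usable_mb / min_container_mb)))
    mem_per_container_mb = max(min_container_mb, usable_mb // containers)

    return {
        "yarn.nodemanager.resource.memory-mb": containers * mem_per_container_mb,
        "yarn.scheduler.minimum-allocation-mb": mem_per_container_mb,
        "yarn.scheduler.maximum-allocation-mb": containers * mem_per_container_mb,
    }

# Example: a VM resized to 64 GB RAM, 16 cores, 8 data disks.
print(recommend_yarn_settings(total_ram_gb=64, cores=16, disks=8))
```

So when you grow the VMs, re-opening the affected configs in the Ambari UI (or calling the Stack Advisor recommendations endpoint) will surface updated values like these; Ambari does not silently rewrite the configs for you.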
Created 06-26-2018 11:31 AM
Exactly what I was looking for. Thank you!