
Ambari Blueprint: Sizing Components Memory Based on Host Environment


I am working on a Chef-driven blueprint deployment where the memory values in the blueprint JSON get adjusted based on the system memory available. The target system memory can range from 8 GB to 145 GB.

HDP will be used for running OpenSOC (Storm, Kafka, HBase, HDFS).

For example, for Kafka, a heap larger than 5 GB would not give any benefit regardless of how much memory the host has.

So I am looking for automatic sizing of the components (HBase, Storm, AMS, HDFS) for which more memory would help the system perform better.

This is for HDP 2.2.8 and Ambari 2.1.2.
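
Today we hand-roll something like the following before rendering the blueprint JSON. The ratios and caps below are placeholders for illustration, not tuned values:

    #!/usr/bin/env bash
    # Sketch: derive per-component heap sizes from total host memory.
    # All ratios and caps here are illustrative placeholders, not recommendations.
    total_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)

    # Kafka: more heap stops helping around 5 GB, so cap it there.
    kafka_heap_mb=$(( total_mb / 4 ))
    (( kafka_heap_mb > 5120 )) && kafka_heap_mb=5120

    # HBase, Storm, and AMS keep scaling with host memory (hypothetical ratios).
    hbase_heap_mb=$(( total_mb / 3 ))
    storm_worker_mb=$(( total_mb / 8 ))
    ams_heap_mb=$(( total_mb / 16 ))

    echo "kafka=${kafka_heap_mb}m hbase=${hbase_heap_mb}m storm=${storm_worker_mb}m ams=${ams_heap_mb}m"

Maintaining these rules by hand is what I would like to avoid.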

1 ACCEPTED SOLUTION


Re: Ambari Blueprint: Sizing Components Memory Based on Host Environment

Ambari Blueprints does not (as of 2.1.2) use the StackAdvisor to provide recommended configurations. It is planned for Ambari 2.1.3 (AMBARI-13487).

But you can use the recommendation engine yourself.

The following script uses the Recommendation API (/api/v1/stacks/HDP/versions/<ver>/recommendations):

  • https://github.com/seanorama/ambari-bootstrap/tree/master/deploy
  • The script generates and deploys a Blueprint, including StackAdvisor recommendations.
  • Alternatively, use it to get the recommendations only (set 'export deploy=false' to just generate the configurations; see the example below).
  • Or borrow the API calls to integrate into your own deployment method.
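
For reference, a raw call against that endpoint looks roughly like this. The host names, credentials, and stack version are placeholders, and the payload fields are from memory, so double-check them against your Ambari version:

    # Ask the StackAdvisor for configuration recommendations (placeholders throughout).
    curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
      -d '{
            "recommend": "configurations",
            "hosts": ["host1.example.com", "host2.example.com"],
            "services": ["HDFS", "KAFKA", "HBASE", "STORM"]
          }' \
      http://ambari-server.example.com:8080/api/v1/stacks/HDP/versions/2.2/recommendations

If memory serves, the recommended values come back under resources[0].recommendations.blueprint.configurations, which you can merge into your blueprint before posting it.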

P.S. A few projects have borrowed this code to generate configurations throughout the life of a cluster.

P.P.S. These scripts are used by Google's bdutil, so they have been heavily field-tested.


4 REPLIES

Re: Ambari Blueprint: Sizing Components Memory Based on Host Environment


@jramakrishnan@hortonworks.com - this is similar to the question you asked earlier. The current incarnation of blueprints is about faithfully replicating a known configuration into a new cluster. It was not intended to dynamically configure the new cluster based on its available resources.

But, as mentioned by @smohanty@hortonworks.com and @rnettleton@hortonworks.com, the next release of Ambari, 2.1.3, will incorporate an option to use Stack Advisor. That option will do some dynamic overrides of the blueprint to tune and optimize the configuration for the target cluster.
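
From what I've seen of the work in progress, the cluster creation request will carry a strategy flag for this. The field name and values below are my best guess at the upcoming 2.1.3 API, so treat this as a sketch rather than the final interface:

    # Hypothetical 2.1.3-style cluster creation request: ask Ambari to apply
    # StackAdvisor recommendations on top of the blueprint's explicit values.
    curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
      -d '{
            "blueprint": "my-blueprint",
            "config_recommendation_strategy": "ONLY_STACK_DEFAULTS_APPLY",
            "host_groups": [
              { "name": "host_group_1", "hosts": [ { "fqdn": "host1.example.com" } ] }
            ]
          }' \
      http://ambari-server.example.com:8080/api/v1/clusters/mycluster

As I understand it, the idea is that values you set explicitly in the blueprint win, while stack defaults get replaced by host-aware recommendations.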


Re: Ambari Blueprint: Sizing Components Memory Based on Host Environment

Thanks @David Schorow. I am looking for some guidance on the current release, as we are doing this right now.


Re: Ambari Blueprint: Sizing Components Memory Based on Host Environment

@jramakrishnan@hortonworks.com - I recall that @jplayer@hortonworks.com and @nfakhar@hortonworks.com adapted the code for diverse hardware on AWS.
