Created 10-22-2015 02:49 PM
I am working on setting up a Blueprint-based install to deploy the cluster via Chef. Is there any general recommendation for setting the heap values based on the available system memory?
For example, if the system memory is 100 GB, should KAFKA_HEAP_MEMORY be set to something like system_memory/20?
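Roughly like this (a rough sketch; the divide-by-20 factor and the helper name are just my own guesses, not from any official guidance):

# Hypothetical helper: size the Kafka heap from total system memory.
# The divide-by-20 rule is only my proposal above, not a recommendation.
def kafka_heap_mb(system_memory_mb):
    return system_memory_mb // 20

# 100 GB of system memory -> 5120 MB (~5 GB)
print(kafka_heap_mb(100 * 1024))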
Created 10-22-2015 02:54 PM
Not really. We are working on integrating the StackAdvisor capability into Blueprint-based deployment in the next release. That will automatically modify the given blueprint to set correct defaults. The Kafka team may have some recommendations.
Created 10-22-2015 08:31 PM
Thanks @smohanty@hortonworks.com. Do we at least have the logic that will be used? I would like to apply it myself. For Kafka, I guess anything above 5 GB is not of much benefit. I am mainly working on AMS, Storm, HBase, and HDFS.
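To make my guess concrete, a capped variant of the sketch above (the 5 GB ceiling is only my assumption, not a documented Kafka limit):

# Hypothetical: cap the heap since larger Kafka heaps may not help much.
def kafka_heap_mb(system_memory_mb, cap_mb=5 * 1024):
    return min(system_memory_mb // 20, cap_mb)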
Created 10-27-2015 03:21 PM
As Sumit mentioned, we currently do not use the StackAdvisor output in Blueprints, but will support this in a future release.
If you have the hardware to experiment with, you can try deploying a cluster with the UI (which will cause the recommendations to be applied), and then export the Blueprint from the running cluster. You can then use the Kafka config in the exported Blueprint as a starting point.
The Blueprints wiki contains the REST call used for exporting a Blueprint:
https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-APIResourcesandSyntax
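To illustrate, here is a minimal sketch of that export call in Python (the host, cluster name, and admin/admin credentials are placeholders; the endpoint itself is the one documented on the wiki page above):

import requests

# Placeholders: substitute your Ambari host, cluster name, and credentials.
AMBARI = "http://ambari-host:8080"
CLUSTER = "mycluster"

# Export the running cluster as a Blueprint:
# GET /api/v1/clusters/:clusterName?format=blueprint (see the wiki above).
resp = requests.get(
    f"{AMBARI}/api/v1/clusters/{CLUSTER}",
    params={"format": "blueprint"},
    auth=("admin", "admin"),
    headers={"X-Requested-By": "ambari"},
)
resp.raise_for_status()

# The Kafka config in this output (e.g. kafka-env) can serve as a starting point.
print(resp.json())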