Suggestions on creating AMS (Ambari Metrics System)

Do we need separate storage for the Ambari Metrics System? How should it be configured?

Can you please suggest how to set up and configure AMS?

3 REPLIES

Expert Contributor

Essentially, behind the scenes AMS uses HBase to store the metrics, and that HBase can be deployed embedded (the default) or distributed. To launch AMS, you need to add the AMS service through Ambari. See https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-user-guide/content/ch_using_ambar... for more details.
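The usual way to add the service is the Add Service wizard in the Ambari web UI, but for illustration here is a minimal sketch of registering it through the Ambari REST API instead. The server address, credentials, and cluster name ("mycluster") are placeholders, not values from this thread:

    # A sketch, assuming Ambari Server runs at the placeholder address below.
    import requests

    AMBARI = "http://ambari.example.com:8080"  # hypothetical Ambari Server
    AUTH = ("admin", "admin")                  # replace with real credentials
    HEADERS = {"X-Requested-By": "ambari"}     # header required by the Ambari API

    # Register the AMBARI_METRICS service with the cluster.
    resp = requests.post(
        f"{AMBARI}/api/v1/clusters/mycluster/services/AMBARI_METRICS",
        auth=AUTH,
        headers=HEADERS,
    )
    resp.raise_for_status()

    # Components (METRICS_COLLECTOR, METRICS_MONITOR) still have to be created
    # and assigned to hosts before Install/Start; the Add Service wizard in the
    # Ambari UI performs all of those steps for you, which is the easier route.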

Thanks for your response. Could you elaborate on the HBase configuration and how the storage part is configured? What are the use cases, and how much storage is required for it?

Expert Contributor

If you just use embedded mode, simply tell AMS where its data should live by setting the "hbase.rootdir" and "hbase.tmp.dir" directory configurations in Ambari Metrics > Configs > Advanced ams-hbase-site to a large partition, for example: file:///grid/0/var/lib/ambari-metrics-collector/hbase .
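For illustration, the two properties might look like this in Advanced ams-hbase-site (the hbase.rootdir path is the one from above; the hbase.tmp.dir path shown is a commonly used default; adjust both to whatever large local partition you have):

    # Advanced ams-hbase-site -- embedded (local filesystem) mode
    hbase.rootdir=file:///grid/0/var/lib/ambari-metrics-collector/hbase
    hbase.tmp.dir=/var/lib/ambari-metrics-collector/hbase-tmp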

If you choose to set up distributed mode, set Ambari Metrics > Configs > General > "Metrics Service operation mode" to distributed. Then, in Advanced ams-hbase-site, set hbase.cluster.distributed to true and hbase.rootdir to an HDFS location.
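As a sketch, the distributed setup involves settings like the following. The NameNode address and HDFS path are placeholders, and the UI's "Metrics Service operation mode" field corresponds to the timeline.metrics.service.operation.mode property in ams-site:

    # ams-site -- switch the collector to distributed mode
    timeline.metrics.service.operation.mode=distributed

    # Advanced ams-hbase-site -- run AMS HBase against HDFS
    hbase.cluster.distributed=true
    hbase.rootdir=hdfs://namenode.example.com:8020/apps/ams/metrics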

The size of AMS data really depends on how heavily the cluster is used, so it may vary greatly. If there is a lot of activity on the cluster, you may choose to give that folder more space. There is no definite formula for how much storage different usage requires, but here is a guideline for disk and memory settings, copied from http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_ambari_reference_guide/content/_ams_gener...

Cluster Environment | Host Count | Disk Space | Collector Mode | TTL | Memory Settings
Single-Node Sandbox | 1 | 2GB | embedded | Reduce TTLs to 7 Days | metrics_collector_heap_size=1024; hbase_regionserver_heapsize=512; hbase_master_heapsize=512; hbase_master_xmn_size=128
PoC | 1-5 | 5GB | embedded | Reduce TTLs to 30 Days | metrics_collector_heap_size=1024; hbase_regionserver_heapsize=512; hbase_master_heapsize=512; hbase_master_xmn_size=128
Pre-Production | 5-20 | 20GB | embedded | Reduce TTLs to 3 Months | metrics_collector_heap_size=1024; hbase_regionserver_heapsize=1024; hbase_master_heapsize=512; hbase_master_xmn_size=128
Production | 20-50 | 50GB | embedded | n.a. | metrics_collector_heap_size=1024; hbase_regionserver_heapsize=1024; hbase_master_heapsize=512; hbase_master_xmn_size=128
Production | 50-200 | 100GB | embedded | n.a. | metrics_collector_heap_size=2048; hbase_regionserver_heapsize=2048; hbase_master_heapsize=2048; hbase_master_xmn_size=256
Production | 200-400 | 200GB | embedded | n.a. | metrics_collector_heap_size=2048; hbase_regionserver_heapsize=2048; hbase_master_heapsize=2048; hbase_master_xmn_size=512
Production | 400-800 | 200GB | distributed | n.a. | metrics_collector_heap_size=8192; hbase_regionserver_heapsize=12288; hbase_master_heapsize=1024; hbase_master_xmn_size=1024; regionserver_xmn_size=1024
Production | 800+ | 500GB | distributed | n.a. | metrics_collector_heap_size=12288; hbase_regionserver_heapsize=16384; hbase_master_heapsize=16384; hbase_master_xmn_size=2048; regionserver_xmn_size=1024
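The "Reduce TTLs" entries in the table refer to the aggregator TTL properties in ams-site, which are expressed in seconds. As an illustrative sketch only (the exact values are assumptions; pick values to match the retention you want), a roughly 7-day retention for the sandbox row could look like:

    # ams-site -- example aggregator TTLs for ~7 days of retention (seconds)
    timeline.metrics.host.aggregator.ttl=86400
    timeline.metrics.host.aggregator.minute.ttl=604800
    timeline.metrics.host.aggregator.hourly.ttl=604800
    timeline.metrics.cluster.aggregator.minute.ttl=604800
    timeline.metrics.cluster.aggregator.hourly.ttl=604800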