Created 04-25-2017 05:49 PM
Do we need separate storage for the Ambari Metrics System (AMS)? How is this configured?
Can you please suggest how to start or configure AMS?
Created 04-25-2017 06:11 PM
Essentially, behind the scenes AMS uses HBase for storing the information, so it can be deployed embedded (the default) or distributed. To launch AMS, add the Ambari Metrics service through Ambari. Check https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-user-guide/content/ch_using_ambar... for more details.
Created 04-25-2017 07:05 PM
Thanks for your response. Could you elaborate on the HBase configuration and how the storage part is configured? What are the use cases, and how much storage is required for it?
Created 04-25-2017 09:22 PM
For embedded use, simply tell AMS where its data should live by pointing the "hbase.rootdir" and "hbase.tmp.dir" settings in Ambari Metrics > Configs > Advanced ams-hbase-site at a large partition, for example: file:///grid/0/var/lib/ambari-metrics-collector/hbase
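As a sketch, the embedded-mode settings above would look like the following ams-hbase-site fragment. The /grid/0 paths are examples only (the hbase-tmp directory name is my assumption, not a documented default); substitute whatever large partition you have available:

```xml
<!-- Advanced ams-hbase-site, embedded mode.
     Paths below are examples; point them at a large local partition. -->
<property>
  <name>hbase.rootdir</name>
  <value>file:///grid/0/var/lib/ambari-metrics-collector/hbase</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <!-- example temp location; name chosen here for illustration -->
  <value>/grid/0/var/lib/ambari-metrics-collector/hbase-tmp</value>
</property>
```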
If you choose to set up distributed mode, then set Ambari Metrics > Configs > General "Metrics Service operation mode" to distributed. In Advanced ams-hbase-site, set hbase.cluster.distributed to true and hbase.rootdir to an HDFS location.
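For distributed mode, the corresponding ams-hbase-site fragment might look like this. The NameNode host, port, and HDFS path are placeholders I made up for illustration; use your own cluster's HDFS address:

```xml
<!-- Advanced ams-hbase-site, distributed mode.
     Also set "Metrics Service operation mode" = distributed under General. -->
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <!-- hypothetical HDFS location; replace host/port/path with yours -->
  <value>hdfs://namenode.example.com:8020/apps/ams/metrics</value>
</property>
```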
It really depends on how heavily the cluster is used; the size of the AMS data can vary greatly. If there is a lot of activity on the cluster, you may choose to give that folder more space. There is no definite formula for how much storage different workloads require, but here are the disk and memory guidelines copied from http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_ambari_reference_guide/content/_ams_gener...
| Cluster Environment | Host Count | Disk Space | Collector Mode | TTL | Memory Settings |
|---|---|---|---|---|---|
| Single-Node Sandbox | 1 | 2GB | embedded | Reduce TTLs to 7 days | metrics_collector_heap_size=1024, hbase_regionserver_heapsize=512, hbase_master_heapsize=512, hbase_master_xmn_size=128 |
| PoC | 1-5 | 5GB | embedded | Reduce TTLs to 30 days | metrics_collector_heap_size=1024, hbase_regionserver_heapsize=512, hbase_master_heapsize=512, hbase_master_xmn_size=128 |
| Pre-Production | 5-20 | 20GB | embedded | Reduce TTLs to 3 months | metrics_collector_heap_size=1024, hbase_regionserver_heapsize=1024, hbase_master_heapsize=512, hbase_master_xmn_size=128 |
| Production | 20-50 | 50GB | embedded | n/a | metrics_collector_heap_size=1024, hbase_regionserver_heapsize=1024, hbase_master_heapsize=512, hbase_master_xmn_size=128 |
| Production | 50-200 | 100GB | embedded | n/a | metrics_collector_heap_size=2048, hbase_regionserver_heapsize=2048, hbase_master_heapsize=2048, hbase_master_xmn_size=256 |
| Production | 200-400 | 200GB | embedded | n/a | metrics_collector_heap_size=2048, hbase_regionserver_heapsize=2048, hbase_master_heapsize=2048, hbase_master_xmn_size=512 |
| Production | 400-800 | 200GB | distributed | n/a | metrics_collector_heap_size=8192, hbase_regionserver_heapsize=12288, hbase_master_heapsize=1024, hbase_master_xmn_size=1024, regionserver_xmn_size=1024 |
| Production | 800+ | 500GB | distributed | n/a | metrics_collector_heap_size=12288, hbase_regionserver_heapsize=16384, hbase_master_heapsize=16384, hbase_master_xmn_size=2048, regionserver_xmn_size=1024 |
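If it helps, the guideline table above can be turned into a tiny lookup helper. This is just a sketch mirroring the table rows, not an official sizing formula (the docs explicitly say there is no definite one); the function name is my own:

```python
# Guideline rows from the table above: (max host count, disk space,
# collector mode, metrics_collector_heap_size in MB).
GUIDELINES = [
    (1,   "2GB",   "embedded",    1024),  # Single-Node Sandbox
    (5,   "5GB",   "embedded",    1024),  # PoC
    (20,  "20GB",  "embedded",    1024),  # Pre-Production
    (50,  "50GB",  "embedded",    1024),  # Production 20-50
    (200, "100GB", "embedded",    2048),  # Production 50-200
    (400, "200GB", "embedded",    2048),  # Production 200-400
    (800, "200GB", "distributed", 8192),  # Production 400-800
]

def sizing_for(host_count):
    """Return (disk space, collector mode, collector heap MB) for a cluster size."""
    for max_hosts, disk, mode, heap in GUIDELINES:
        if host_count <= max_hosts:
            return disk, mode, heap
    # 800+ hosts falls through to the largest configuration.
    return "500GB", "distributed", 12288

print(sizing_for(30))    # mid-size production cluster
print(sizing_for(1000))  # large cluster, distributed mode
```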