Member since
09-28-2015
60
Posts
35
Kudos Received
10
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 981 | 06-12-2018 08:36 PM |
| | 983 | 12-10-2017 07:17 PM |
| | 4900 | 10-27-2017 06:36 AM |
| | 2202 | 10-25-2017 06:39 PM |
| | 1148 | 10-02-2017 11:54 PM |
06-12-2018
08:36 PM
@Dhiraj Yes. That should be fine.
12-10-2017
07:17 PM
1 Kudo
@Gaurav Parmar If you are asking about the numbers 1324256400 (Monday, December 19, 2011 1:00:00 AM) and 1324303200 (GMT: Monday, December 19, 2011 2:00:00 PM), they are epoch timestamps. I am not sure about your use case or how/when you are going to supply the timestamp, but this is one reference for converting human-readable dates and times to timestamps and vice versa: https://www.epochconverter.com/
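If you would rather do the conversion in code than through the website, a minimal sketch using only Python's standard library (the timestamps are the ones from the question):

```python
from datetime import datetime, timezone

def epoch_to_utc(epoch_seconds):
    """Convert an epoch timestamp (seconds) to a human-readable UTC string."""
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")

def utc_to_epoch(year, month, day, hour=0, minute=0, second=0):
    """Convert a UTC date/time to an epoch timestamp (seconds)."""
    return int(datetime(year, month, day, hour, minute, second, tzinfo=timezone.utc).timestamp())

print(epoch_to_utc(1324303200))        # 2011-12-19 14:00:00 UTC
print(utc_to_epoch(2011, 12, 19, 14))  # 1324303200
```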
11-07-2017
12:38 AM
Thanks. Glad to know that it helped.
11-07-2017
12:33 AM
4 Kudos
@vrathod This will give a top-level view of the stacks available: http://<host_ip>:8080/api/v1/stacks/ For the HDP stack versions: http://<host_ip>:8080/api/v1/stacks/HDP Hope this helps.
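A small sketch of querying those endpoints from Python's standard library. The host name and the admin/admin credentials are placeholders; substitute your Ambari server's address and login.

```python
import base64
import json
import urllib.request

def stack_url(ambari_host, stack=""):
    """Build the Ambari stacks endpoint URL; an empty stack name lists all stacks."""
    return "http://%s:8080/api/v1/stacks/%s" % (ambari_host, stack)

def ambari_get(url, user="admin", password="admin"):
    """GET an Ambari REST endpoint with basic auth and return the parsed JSON."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req = urllib.request.Request(url, headers={"Authorization": "Basic " + token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    host = "ambari.example.com"  # hypothetical host, replace with yours
    print(ambari_get(stack_url(host)))         # top-level view of stacks
    print(ambari_get(stack_url(host, "HDP")))  # versions within the HDP stack
```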
10-27-2017
06:36 AM
1 Kudo
@Saravanan Ramaraj I assume the question is about the YARN total memory. This happens because Ambari uses the smallest-capacity node in its calculations, as Ambari expects a homogeneous cluster. In this case, however, we have a heterogeneous cluster: 1 master with 4 CPU / 16 GB RAM + 1 data node with 8 CPU / 30 GB RAM. Thus Ambari picks the 16 GB node, assumes the 2nd node is the same size, and calculates YARN's Node Manager (NM) memory accordingly. I assume that both nodes have a Node Manager running. I believe you would have 11 GB as the value of YARN/yarn.nodemanager.resource.memory-mb. Thus we have 22 GB (11 * 2) available in this case, which is > 16 GB. 16 * 2 = 32 GB, but Ambari subtracts the memory required to run processes outside the YARN workspace (e.g. RM, HBase etc.), so we have less than 32 GB available (which is expected). It's a good idea to have homogeneous clusters.
===================================================================
However, you can make use of Config Groups in Ambari based on the different hardware profiles. You can create 2 Config Groups (CG) where each CG has one node. By default, there is a default CG, as seen on the YARN configs page, containing both nodes. How to create a CG is exemplified using HBase here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_Ambari_Users_Guide/content/_using_host_config_groups.html I did the following testing in order to reduce the memory for one node; you can similarly bump up the memory for the 30 GB node.
- Starting with the 2-node cluster, where Ambari had given 12 GB to each NM, with total capacity being 24 GB.
- Created a CG named 'New' and added the 2nd node to it. Then changed YARN/yarn.nodemanager.resource.memory-mb for the 2nd node under 'New' from ~12 GB to ~8 GB.
- (Screenshot: state of Node 1 under the 'default' CG.)
- Restarted the "Affected components" as prompted by Ambari after the above changes.
- The total memory changes from 24 GB to 20 GB now.
Hope this helps.
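The arithmetic described above can be summarized in a couple of lines (just a sketch of the calculation, not Ambari's actual code):

```python
def yarn_total_memory_gb(per_nm_gb, nm_count):
    """Cluster YARN capacity: per-NM memory (yarn.nodemanager.resource.memory-mb) * NM count."""
    return per_nm_gb * nm_count

# Ambari sized every NM off the smallest (16 GB) host, leaving ~11 GB per NM
# after reserving memory for non-YARN processes, so with 2 Node Managers:
before = yarn_total_memory_gb(11, 2)            # 22 GB total

# After splitting into config groups (12 GB default CG node + 8 GB 'New' CG node):
after = yarn_total_memory_gb(12, 1) + yarn_total_memory_gb(8, 1)  # 20 GB total
print(before, after)
```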
10-25-2017
07:31 PM
I believe you need to figure out why multiple Spark apps are running. If this is not a production cluster, and no one will be affected by restarting Spark, you can look into that option. But this leads me to believe that the configuration setting for how many Spark apps are supposed to run is most probably the difference between your two clusters. I am not enough of a Spark expert to point you to the exact config to look for.
10-25-2017
06:39 PM
@uri ben-ari You can check it from the YARN Resource Manager UI (RM UI). From the Ambari YARN page, open the RM UI. From the RM UI, you can look at the applications running under YARN, then look into the memory consumption of each application and compare your clusters for discrepancies. The RM UI shows the list of apps (with Allocated Memory); you can click on a specific app for a detailed look at the queue and memory used.
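The same per-application memory figures the RM UI displays are also exposed by the ResourceManager REST API (`/ws/v1/cluster/apps`), which makes cross-cluster comparison scriptable. A sketch, assuming a hypothetical RM host and the default port 8088:

```python
import json
import urllib.request

def fetch_running_apps(rm_host):
    """List running YARN apps via the RM REST API (same data the RM UI shows)."""
    url = "http://%s:8088/ws/v1/cluster/apps?states=RUNNING" % rm_host
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return (data.get("apps") or {}).get("app") or []

def top_memory_apps(apps, n=5):
    """Sort apps by allocated memory, descending, to spot the heavy consumers."""
    return sorted(apps, key=lambda a: a.get("allocatedMB", 0), reverse=True)[:n]

if __name__ == "__main__":
    apps = fetch_running_apps("rm.example.com")  # hypothetical RM host
    for app in top_memory_apps(apps):
        print(app["name"], app["queue"], app["allocatedMB"], "MB")
```

Running this against each cluster and comparing the output is a quick way to spot which apps account for the memory discrepancy.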
10-24-2017
07:55 PM
9 Kudos
If the cluster has only one queue at root level, named 'default', consuming 100% of the capacity, Ambari will create a queue named 'llap' when HSI is enabled for the 1st time, sized to whichever is smaller:
- the minimum %age required for LLAP to work, or
- 20% of the cluster's capacity.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
If this is not the case and there is more than one queue in the cluster, the user will have to create/set the queue capacity %age to be used for the LLAP app. Starting from the minimum required queue capacity (shown below), one can increase the queue %age in order to add LLAP nodes to the cluster, as queue size is one of the primary drivers of how many Node Manager nodes will run LLAP. Reference code for calculating the minimum queue size. The following calculations can be a good reference for working out the minimum queue capacity %age to set, using these config values as referenced from the Ambari UI:
- Total Node Manager nodes in the Ambari cluster (NMCount). Can be obtained from Ambari's YARN page.
- YARN Node Manager size (YarnNMSize) (yarn-site/yarn.nodemanager.resource.memory-mb)
- YARN minimum container size (YarnMinContSize) (yarn-site/yarn.scheduler.minimum-allocation-mb)
- Slider AM container size (SliderAmSize) (hive-interactive-env/slider_am_container_mb). It is calculated as shown here.
- Hive Tez container size (HiveTezContSize) (hive-interactive-site/hive.tez.container.size)
- Tez AM container size (TezAmContSize) (tez-interactive-site/tez.am.resource.memory.mb)
The NormalizeUp() function normalizes the 1st parameter w.r.t. the 2nd parameter (YarnMinContSize). Code reference is here; the snippet function can be used for the calculation by putting it in a python file and calling it with the correct params, or by doing a manual calculation. Min. 
Total capacity required for queue to run LLAP (MinCapForLlapQueue) =
NormalizeUp(SliderAmSize, YarnMinContSize) +
NormalizeUp(HiveTezContSize, YarnMinContSize) +
NormalizeUp(TezAmContSize, YarnMinContSize)
Total Cluster Capacity (ClusterCap) = NMCount * YarnNMSize
Min. Queue Percentage Required for queue used for LLAP (in %) (MinQueuePerc) = MinCapForLlapQueue * 100 / ClusterCap
Thus, the 'MinQueuePerc' value can be used to set the size of the queue used for the LLAP app. The queue %age can be changed from Ambari > Views > YARN Queue Manager.
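The formulas above can be sketched directly in Python. The config values in the example call are hypothetical (3 NMs of 12 GB each, 1 GB minimum container, etc.); plug in the values from your own Ambari UI.

```python
import math

def normalize_up(value_mb, yarn_min_cont_size_mb):
    """NormalizeUp: round value_mb up to the nearest multiple of YarnMinContSize."""
    return int(math.ceil(float(value_mb) / yarn_min_cont_size_mb) * yarn_min_cont_size_mb)

def min_llap_queue_percent(nm_count, yarn_nm_size_mb, yarn_min_cont_size_mb,
                           slider_am_size_mb, hive_tez_cont_size_mb, tez_am_cont_size_mb):
    """Minimum queue capacity %age for the LLAP queue, per the formulas above."""
    min_cap_for_llap_queue = (normalize_up(slider_am_size_mb, yarn_min_cont_size_mb)
                              + normalize_up(hive_tez_cont_size_mb, yarn_min_cont_size_mb)
                              + normalize_up(tez_am_cont_size_mb, yarn_min_cont_size_mb))
    cluster_cap = nm_count * yarn_nm_size_mb
    return min_cap_for_llap_queue * 100.0 / cluster_cap

# Hypothetical values (all MB): 3 NMs x 12288, min container 1024,
# Slider AM 1024, Hive Tez container 4096, Tez AM 2048.
print(min_llap_queue_percent(3, 12288, 1024, 1024, 4096, 2048))
```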
10-13-2017
06:33 PM
@azhar shaikh The timeout comes from the finite timeout Ambari puts on its service check python scripts so that they bail out rather than run forever. The point to note here is that there may be a problem with HBase health in general, which is either making the HBase service check take longer than 300 secs (performance) or means the HBase process is not responding at all. Can you check the logs for HBase and the services it depends on to verify their workable state? CC @Chinmay Das
10-02-2017
11:54 PM
1 Kudo
@Johnny Fugers They are part of the Hortonworks Data Platform (HDP) and are 100% open source under the Apache license. In order to get enterprise support for these products, you can start from this link to explore pricing for support and professional services: https://hortonworks.com/services/support/enterprise/ Phone contact: 1.408.675.0983