07-13-2017 04:09 AM
I have two types of servers in my Hadoop cluster, one with 64 GB of RAM and the other with 128 GB. I created two templates: for the 64 GB servers I defined the container memory as 58 GB, and for the stronger servers as 120 GB.
The issue I'm facing is that tasks are distributed across the nodes symmetrically, so the servers with 58 GB of container memory reach 99% memory usage and raise alerts on my monitoring system, while the stronger servers sit at around 50% usage.
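For concreteness, this is roughly what the two templates set. A minimal sketch of the relevant yarn-site.xml property, assuming container memory here means yarn.nodemanager.resource.memory-mb (values in MB):

    <!-- Template for the 64 GB servers (assumed values) -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>59392</value>  <!-- 58 GB -->
    </property>

    <!-- Template for the 128 GB servers -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>122880</value>  <!-- 120 GB -->
    </property>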
Appreciate any help.
07-13-2017 02:24 PM
Weird behaviour from the ResourceManager ...
It's expected that I'll need to grow my cluster over time, and over time stronger servers come onto the market at the same price as the old ones. It doesn't make sense to replace every server in the cluster just to bring in stronger ones, especially since HDFS already copes with mixed hardware on the storage side: its placement policy can be set per DataNode so that the larger nodes store more data, rather than distributing blocks round-robin.
On the other hand, it doesn't make sense to reduce the container memory on the larger servers to the lower level just to silence the monitoring, wasting the unused memory.
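On the HDFS side, I assume the behaviour referred to is the available-space block placement policy (added in HDFS-8131), which makes the NameNode favour DataNodes with more free space. A sketch of the hdfs-site.xml settings:

    <!-- Prefer DataNodes with more free space when placing blocks (sketch) -->
    <property>
      <name>dfs.block.replicator.classname</name>
      <value>org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy</value>
    </property>
    <property>
      <!-- 0.5 = no preference; values toward 1.0 favour the emptier node more often -->
      <name>dfs.namenode.available-space-block-placement-policy.balanced-space-preference-fraction</name>
      <value>0.6</value>
    </property>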
07-13-2017 02:51 PM
What I'm thinking now is to add physical memory to the smaller servers, since they have the same number of cores as the large ones.
Hopefully our systems team can do that and we have that option.
07-20-2017 09:56 PM
For now, I reduced the container memory on the small servers from 58 GB to 50 GB and dropped the Impala role from those nodes.
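That is, in the small-server template (same assumed property as above, value in MB):

    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>51200</value>  <!-- 50 GB, down from 58 GB -->
    </property>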
@mbigelow i'm planning to align all the hadoop servers in my clusters,
One of the issues i ran on also that one small server had a template of the larger servers which impacted the server memory.
Looking forward to add monitoring to catch such cases.
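A minimal sketch of such a check, run on each node: it compares the NodeManager's configured container memory against the host's physical RAM and warns when a small server appears to carry the large-server template. The config path, safety margin, and output format are all assumptions to adapt:

    #!/usr/bin/env python3
    """Warn if configured YARN container memory exceeds this host's RAM."""
    import xml.etree.ElementTree as ET

    YARN_SITE = "/etc/hadoop/conf/yarn-site.xml"  # assumed config path
    SAFETY_MARGIN_MB = 6 * 1024  # leave ~6 GB for the OS and daemons (assumption)

    def configured_container_mb(path=YARN_SITE):
        """Read yarn.nodemanager.resource.memory-mb from yarn-site.xml."""
        for prop in ET.parse(path).getroot().iter("property"):
            if prop.findtext("name") == "yarn.nodemanager.resource.memory-mb":
                return int(prop.findtext("value"))
        return None  # not set explicitly in this file

    def physical_mb():
        """Total physical RAM from /proc/meminfo (Linux only)."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) // 1024  # kB -> MB
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    if __name__ == "__main__":
        conf, phys = configured_container_mb(), physical_mb()
        if conf is None:
            print("yarn.nodemanager.resource.memory-mb not set in yarn-site.xml")
        elif conf > phys - SAFETY_MARGIN_MB:
            print(f"WARNING: containers {conf} MB vs physical {phys} MB - wrong template?")
        else:
            print(f"OK: containers {conf} MB fits within physical {phys} MB")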