We are trying to configure a new server. Is there a formula that can be used to determine the optimum memory required for JVM containers? We are using HDFS, YARN, Solr, MapReduce...
@sbd4q0 This doc can help you understand the heap requirements.
Also, as a general rule of thumb, about 1 GB of heap is needed per 1 million HDFS blocks.
The calculation to determine Hadoop memory over-commit per host is as follows:

overcommit = available_memory_for_hadoop - total_hadoop_java_heap - impala_memory

A negative result means the host is over-committed.
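As a quick sanity check, the two rules above can be sketched in a few lines of Python. This is only an illustration: the function names and the example numbers (all in GB) are mine, not from any Cloudera tool.

```python
def heap_gb_for_blocks(block_count):
    """Rule-of-thumb heap estimate: roughly 1 GB of heap per 1 million blocks."""
    return max(1, block_count // 1_000_000)

def memory_overcommit_gb(available_memory_for_hadoop,
                         total_hadoop_java_heap,
                         impala_memory):
    """Positive result = headroom left on the host; negative = over-committed."""
    return available_memory_for_hadoop - total_hadoop_java_heap - impala_memory

print(heap_gb_for_blocks(5_000_000))       # ~5 GB heap for 5 million blocks
print(memory_overcommit_gb(256, 200, 64))  # -8 => this host is over-committed
```

Plugging in your actual per-host numbers from Cloudera Manager (or `free -g` plus your configured heap sizes) will tell you whether the JVM containers fit in the memory you have.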