We are trying to configure a new server. Is there a formula we can use to determine the optimum memory required for the JVM containers? We are using HDFS, YARN, Solr, MapReduce...
@sbd4q0 This doc can help you to understand the heap requirements.
Also, as a general rule of thumb, we consider that 1 million HDFS blocks need about 1 GB of NameNode heap.
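If it helps, here is a minimal sketch of that rule of thumb in Python; the function name, the 1 GB floor, and the example block count are illustrative assumptions, not an official formula:

```python
# Rough NameNode heap estimate from the "1 million blocks ~= 1 GB heap"
# rule of thumb above. Function name and example figures are illustrative.

def estimate_namenode_heap_gb(block_count: int) -> float:
    """Estimate NameNode heap in GB at ~1 GB per 1 million HDFS blocks."""
    # Assume at least 1 GB even for very small clusters (my assumption).
    return max(1.0, block_count / 1_000_000)

# Example: a cluster holding 25 million blocks -> ~25 GB of NameNode heap
print(estimate_namenode_heap_gb(25_000_000))  # 25.0
```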
The calculation to determine Hadoop memory over-commit per host is as follows:

commit = available_memory_for_hadoop - total_hadoop_java_heap - impala_memory

if (total_system_memory * 0.8) < (sum(java_heap_of_processes) * 1.3 + impala_memory)
then flag the host as over-committed
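As a sketch, the over-commit check could look like this in Python. The 0.8 and 1.3 factors come straight from the formula above (20% reserved for the OS, 30% JVM overhead on top of heap); the function name and the sample numbers are my own assumptions:

```python
# Sketch of the over-commit check described above. Variable names follow
# the formula; function name and example values are illustrative only.

def is_overcommitted(total_system_memory_gb: float,
                     java_heaps_gb: list[float],
                     impala_memory_gb: float) -> bool:
    """Flag a host as over-committed per the formula above."""
    usable = total_system_memory_gb * 0.8                  # 20% held back for the OS
    demand = sum(java_heaps_gb) * 1.3 + impala_memory_gb   # heaps + ~30% JVM overhead
    return usable < demand

# Example: 64 GB host, 40 GB of combined JVM heaps, 16 GB reserved for Impala
# 64 * 0.8 = 51.2 GB usable vs. 40 * 1.3 + 16 = 68 GB demanded -> over-committed
print(is_overcommitted(64, [20, 12, 8], 16))  # True
```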