hbase memory overhead

New Contributor

Why do CDH components need an additional 30% memory overhead?

 overhead.png

 

1 REPLY

Re: hbase memory overhead

Contributor

The heap size is reserved for the service's data, while the extra overhead is reserved for off-heap data structures the service needs (JVM metadata, thread stacks, direct buffers, and so on). Spark, for example, reserves a default memory overhead of 10% of executor memory (with a 384 MiB minimum) on top of the heap when allocating executor containers.
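To illustrate the idea, here is a minimal sketch (not Cloudera's exact sizing formula) of how total container memory works out when a fractional overhead with a floor is added on top of the configured heap, in the style of Spark's executor memory overhead:

```python
# Sketch only: total memory request = heap + overhead, where overhead is
# the larger of a fraction of the heap and a fixed floor. The 30% fraction
# and 384 MiB floor here are illustrative assumptions, not CDH defaults.
def total_memory_mb(heap_mb, overhead_fraction=0.3, min_overhead_mb=384):
    overhead = max(int(heap_mb * overhead_fraction), min_overhead_mb)
    return heap_mb + overhead

# A 4 GiB heap with 30% overhead needs roughly 5.2 GiB of container memory.
print(total_memory_mb(4096))  # -> 5324
```

So the 30% is not wasted memory; it is headroom the process will actually use outside the Java heap, and undersizing it is a common cause of containers being killed for exceeding their memory limit.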