
Cloudera Manager warning about Max Message Size for Hive MetaStore

Contributor

After upgrading from CDH 5.7.5 to 5.10.0, Cloudera Manager gives a couple of warnings about "Max Message Size for Hive Metastore":


Hive: Max Message Size for Hive MetaStore
The value of Max Message Size for Hive MetaStore (104857600) should be at least 10% of the value of Java Heap Size of Hive Metastore Server in Bytes (858993459).


Hive Metastore Server (bsl-ib-c3): Max Message Size for Hive MetaStore

The value of Max Message Size for Hive MetaStore (104857600) should be at least 10% of the value of Java Heap Size of Hive Metastore Server in Bytes (858993459).


These are just warnings, but the default 100 MB is over 12% of the displayed Java Heap Size of Hive Metastore Server in Bytes (104,857,600 / 858,993,459 ≈ 12.2%), which should satisfy the criterion of being "at least 10%". Am I missing something obvious?
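For what it's worth, the arithmetic can be checked directly from the two numbers quoted in the warning (a quick sanity check only, not Cloudera Manager's actual validator logic):

```python
# Both values are taken verbatim from the warning text above.
max_message_size = 104_857_600  # default Max Message Size: 100 MiB
heap_size = 858_993_459         # heap size as displayed in the warning

ratio = max_message_size / heap_size
print(f"{ratio:.1%}")  # about 12.2%, comfortably above the 10% threshold
```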

3 REPLIES


I got the same warning after I upgraded CM from 5.8.3 to 5.10.

For my cluster, Java Heap Size of Hive Metastore Server is set to 8 GB, which is actually 8,589,934,592 bytes rather than what the warning displays. So I guess the warning is asking me to increase the Max Message Size from 100 MB (104,857,600 bytes) to roughly 860 MB.


Can someone verify that, as well as explain what message size actually is?


Expert Contributor

Thanks for asking about this. The Max Message Size for Hive Metastore should be set to 10% of the Metastore server heap size, up to a maximum of 2,147,483,647 bytes (the largest signed 32-bit integer).
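That sizing rule can be sketched as follows (my reading of it: the cap exists because the Thrift layer stores the message size as a signed 32-bit int, so this is an illustration, not CM's exact code):

```python
# Cap corresponding to the largest signed 32-bit integer (2**31 - 1).
THRIFT_MAX = 2_147_483_647

def recommended_max_message_size(heap_bytes: int) -> int:
    """Return 10% of the Metastore heap, capped at the 32-bit limit."""
    return min(heap_bytes // 10, THRIFT_MAX)

# An 8 GiB heap yields 858,993,459 bytes -- the exact figure shown
# in the warning the original poster quoted.
print(recommended_max_message_size(8 * 1024**3))   # 858993459

# A very large heap hits the cap.
print(recommended_max_message_size(64 * 1024**3))  # 2147483647
```

Note that 10% of 8 GiB reproduces the 858,993,459 figure from the warning, which suggests the validator compared the message size against a tenth of the heap rather than the heap itself.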


Unfortunately, the values used or displayed by that configuration validator can be incorrect in some cases. Until that is fixed, I recommend checking the actual Hive Metastore Server heap size and setting the max message size to 10% of it manually.
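If I recall correctly, the CM field maps to the upstream Hive property `hive.metastore.server.max.message.size`, so the value can also be set through the hive-site.xml safety valve. A sketch (the value shown is 10% of an 8 GiB heap; adjust to your own heap size):

```xml
<!-- hive-site.xml safety valve: property behind CM's
     "Max Message Size for Hive MetaStore" field.
     858993459 bytes = 10% of an 8 GiB heap. -->
<property>
  <name>hive.metastore.server.max.message.size</name>
  <value>858993459</value>
</property>
```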

Explorer

We're upgrading from 5.7 to 5.12 and received the same warning. In our case the Hive Metastore heap size is 12 GB, so the max message size would be set to 1.2 GB. That seems a bit scary to me.

What are the risks of setting the value that high?

How can we handle large metastore updates if that happens? Do we need to prepare the metastore database (i.e., MySQL) to do something differently?

What happens if we don't follow this recommendation?


Thanks,

Sunil