
How to increase the ZNode size in an Ambari cluster


Hi all,

When we upload a conf file to ZooKeeper we get an error: the .....xml file weighs 1095 KB, which is more than ZooKeeper allows by default (1024 KB).

So we decided to increase the ZNode size (the default is limited to 1 MB).

According to the article https://community.hortonworks.com/content/supportkb/150660/error-javaioioexception-len-error-in-zook... we set the following (in order to support the 1095 KB size):

export JAVA_OPTS="-Djute.maxbuffer=11000000"

in the zookeeper-env template (Ambari ZooKeeper CONFIG).
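
To show exactly where we put it, this is roughly how the end of our zookeeper-env template looks now (the surrounding lines are quoted from memory of the stock HDP template, so they may differ in your stack version):

export ZOO_LOG_DIR={{zk_log_dir}}
export ZOOPIDFILE={{zk_pid_file}}
export SERVER_JVMFLAGS={{zk_server_heapsize}}
# our addition, intended to raise the default 1 MB znode limit enough for the 1095 KB file
export JAVA_OPTS="-Djute.maxbuffer=11000000"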

Is this the correct action?

Michael-Bronson
1 ACCEPTED SOLUTION

Master Mentor

@Michael Bronson

Sorry to get back to you late; there is no real test. But you could retry the same operation: upload the conf file to ZooKeeper again, and you should not get the same error.


8 REPLIES

Master Mentor

@Michael Bronson

ZooKeeper is not designed as a large data store to hold very large data values. That said, this 1 MB value is just a default config option and can be overridden.

It is NOT advised to do so, but increasing the size a little bit will probably not damage your system. It all depends on your unique access patterns, and these changes should be made with care and at your own risk!

The parameter to change is the following (CAUTION, as reiterated above):

-Djute.maxbuffer=<bytes>
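
As a rough illustration only (my own numbers, not from the KB article): your file is 1095 KB, i.e. about 1095 x 1024 = 1,121,280 bytes, so any value comfortably above that will do, for example:

# a hypothetical value of roughly 2 MB; choose whatever margin suits your case
-Djute.maxbuffer=2097152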

Please revert with the outcome.

HTH


Thank you, Geoffrey. As I mentioned, we want to increase this value by only 0.1 MB; as you said, this isn't much, and we hope it won't cause any issues.

Second, can you confirm my syntax in the zookeeper-env template in Ambari, or do you have any remarks?

export JAVA_OPTS="-Djute.maxbuffer=11000000"
Michael-Bronson

Master Mentor

@Michael Bronson

That looks good. Caution: the system property must be set on all servers and clients, otherwise problems will arise. This is really a sanity check.

export JAVA_OPTS="-Djute.maxbuffer=11000000"

Even if you raise the limit only on the ZooKeeper servers, you will run into issues as soon as one of your clients (e.g. Solr) wants to do something with that file. Thus, you need to set jute.maxbuffer for your clients as well.
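
For example (an illustrative sketch only; exact paths and variable names depend on your distribution and client), the ZooKeeper CLI and a standalone Java client would each need the property on their own JVM:

# ZooKeeper CLI: CLIENT_JVMFLAGS is read by zkCli.sh
export CLIENT_JVMFLAGS="-Djute.maxbuffer=11000000"
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk1.example.com:2181

# a standalone Java client (the class name is made up for this example)
java -Djute.maxbuffer=11000000 -cp myapp.jar com.example.MyZkClient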


Geoffrey, as I understand it, if I set the variable export JAVA_OPTS="-Djute.maxbuffer=11000000"

in the Ambari GUI (Ambari --> ZooKeeper --> CONFIGS --> zookeeper-env template),

then click Save and restart the service,

it will affect all ZooKeeper servers - am I right?

So there is no need to set it separately on each server.

Michael-Bronson

Master Mentor

@Michael Bronson

AFAIK that should work, as Ambari manages all the cluster config, but as usual you should always validate when setting global params 🙂
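
For example (the conf path below is the usual HDP location, adjust if yours differs), after the restart you could check on each ZooKeeper host that Ambari actually rendered the line:

# run on every ZooKeeper server
grep -n "jute.maxbuffer" /etc/zookeeper/conf/zookeeper-env.sh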


How do I validate that? I mean, after I set export JAVA_OPTS="-Djute.maxbuffer=11000000" in Ambari

and restart the ZooKeeper service, how can I verify that this setting took effect?

Is there any way from the CLI to check that jute.maxbuffer=11000000 is in effect?

We tried:

 ps -ef | grep maxbuffer

but got no output.

Any suggestion on how to validate the setting?
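
A couple of other checks we are thinking of trying (sketches only; they assume the property actually ends up as a -D flag on the server JVM, and jinfo needs the JDK installed plus permission to attach to the process):

# find the ZooKeeper server process and dump its system properties
ZK_PID=$(pgrep -f org.apache.zookeeper.server.quorum.QuorumPeerMain)
sudo jinfo -sysprops "$ZK_PID" | grep jute.maxbuffer

# or inspect the full command line of the running process
tr '\0' ' ' < /proc/"$ZK_PID"/cmdline | grep -o 'jute.maxbuffer=[0-9]*'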

Michael-Bronson


I also found this, so which is the right variable to set (JAVA_OPTS or JVMFLAGS)?

export JVMFLAGS="-Djute.maxbuffer=11000000"
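
For now (just my guess, please correct me), since the stock Apache zkServer.sh seems to read SERVER_JVMFLAGS and JVMFLAGS rather than JAVA_OPTS, we are considering keeping both lines in the template until this is confirmed:

export JAVA_OPTS="-Djute.maxbuffer=11000000"
export JVMFLAGS="-Djute.maxbuffer=11000000"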
Michael-Bronson

Master Mentor

@Michael Bronson

Sorry to get back to you late; there is no real test. But you could retry the same operation: upload the conf file to ZooKeeper again, and you should not get the same error.