
Zookeeper Performance and Metrics - When to resize?


What metrics should we be monitoring to know when ZooKeeper is overloaded? How do you know when it's time to resize, add nodes, or stand up a separate ZK cluster?

I have heard that you should look at the negotiated session timeouts in the ZooKeeper logs, but what are they really telling me?

We are currently running a 3-node HA cluster with 3 ZK servers installed on m4.2xlarge boxes. With HDP 2.4.2, a lot more services rely on ZK for HA capabilities, and we want to make sure we're not overloading ZK with all of these services (NN, Hive, Storm, Kafka, Solr, YARN, etc.).
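For concrete numbers, ZooKeeper's `mntr` four-letter-word command (e.g. `echo mntr | nc zk-host 2181`) dumps latency, outstanding-request, and connection counters per server. Here is a minimal sketch of parsing that output; the metric names are real `mntr` keys, but the sample values and the warning thresholds are illustrative assumptions, not official guidance:

```python
# Parse the output of ZooKeeper's "mntr" four-letter-word command.
# The command itself is real; this sample output is illustrative,
# not captured from a live server.
SAMPLE_MNTR = """\
zk_avg_latency\t2
zk_max_latency\t255
zk_outstanding_requests\t0
zk_num_alive_connections\t843
zk_followers\t2
zk_synced_followers\t2
"""

def parse_mntr(text):
    """Turn tab-separated mntr lines into a dict of metric -> value."""
    metrics = {}
    for line in text.splitlines():
        key, _, value = line.partition("\t")
        metrics[key] = int(value) if value.lstrip("-").isdigit() else value
    return metrics

m = parse_mntr(SAMPLE_MNTR)
# Thresholds below are arbitrary starting points for alerting,
# not documented ZooKeeper limits -- tune them for your workload.
if m["zk_outstanding_requests"] > 10 or m["zk_avg_latency"] > 10:
    print("ZooKeeper may be overloaded")
```

Watching `zk_avg_latency`, `zk_outstanding_requests`, and `zk_num_alive_connections` trend upward over time is a more direct overload signal than log messages alone.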


Re: Zookeeper Performance and Metrics - When to resize?


@Jon Masts

ZooKeeper scaling is a balance between read throughput and write performance. As you add ZK servers, write performance goes down while the number of client connections you can serve goes up. You will want an odd number of ZK servers for quorum purposes. Here is a graph from the ZooKeeper wiki showing the throughput of ensembles of different sizes across varying percentages of reads:


Reference: ZooKeeper Throughput as the Read-Write Ratio Varies

As you can see, more servers are not necessarily better. You get better throughput from 5 ZK servers until the workload rises above roughly 90% reads; beyond that point, 7 servers show slightly better performance.

In short, 5 ZK servers will serve you well in most scenarios. For production clusters you also want 5 servers for fault tolerance (you can take one server down for maintenance and still tolerate a failure), so I'd recommend just going with 5 ZK servers from the start.
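The crossover in that graph can be illustrated with a deliberately crude model: assume any server can answer a read (so per-operation read cost shrinks as the ensemble grows), while each write must be replicated across the ensemble (so write cost grows with it). The cost constants here are arbitrary; this is a toy illustration of the shape of the tradeoff, not a benchmark:

```python
def relative_throughput(n, read_fraction):
    """Toy model: reads cost 1/n (any server can answer), writes cost n
    (the leader must replicate across the ensemble). Units are arbitrary;
    only the relative comparison between ensemble sizes is meaningful."""
    read_cost = read_fraction / n
    write_cost = (1 - read_fraction) * n
    return 1.0 / (read_cost + write_cost)

# At write-heavy mixes, smaller ensembles win; only at very read-heavy
# mixes does a 7-server ensemble edge out a 5-server one.
```

Even this crude model reproduces the pattern in the wiki graph: 5 beats 7 at 90% reads, and 7 pulls ahead only near 99% reads.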