Kafka setup Multibroker on multiple servers AWS EC2

New Contributor

Hi

I am trying to set up my Kafka cluster with brokers on multiple servers, and I'm not sure what the configuration should be for this scenario. I checked online and on the official Apache website, but they only cover a multi-broker setup within a single server, not what I want. Could anyone please help me with this? It would be highly appreciated.

My current setup:

server.properties (the same settings on all my nodes, with a different broker.id and the respective host.name on each):

broker.id=0
host.name=master
delete.topic.enable=true
log.dirs=/home/ec2-user/data/kafka/kafka-logs
zookeeper.connect=master:2181,slave:2181,slave1:2181,slave2:2181

zookeeper.properties:

dataDir=/home/ec2-user/data/zookeeper
clientPort=2181
maxClientCnxns=0

Thanks

1 REPLY

Super Collaborator

Confluent, the company that provides commercial support for Kafka, has documentation on this:

https://docs.confluent.io/current/kafka/deployment.html#multi-node-configuration

  1. Each Broker must connect to the same ZooKeeper ensemble at the same chroot via the zookeeper.connect configuration.
  2. Each Broker must have a unique value for broker.id set explicitly in the configuration OR broker.id.generation.enable must be set to true.
  3. Each Broker must be able to communicate with each other broker directly via one of the methods specified in the listeners or advertised.listeners configuration (see the sketch below).
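
Applied to your setup, and assuming the hostnames master/slave/slave1/slave2 resolve between your EC2 instances, a rough sketch of server.properties for the broker on master would be the following (port 9092 and the PLAINTEXT listener are assumptions, adjust to your security setup; on recent Kafka versions, listeners/advertised.listeners supersede host.name):

# Unique per broker: 0 on master, 1 on slave, 2 on slave1, 3 on slave2
broker.id=0
delete.topic.enable=true
log.dirs=/home/ec2-user/data/kafka/kafka-logs
# Bind on all interfaces, but advertise an address the other brokers and
# clients can actually reach (on EC2, typically the private DNS name)
listeners=PLAINTEXT://0.0.0.0:9092
# On each node, advertise that node's own hostname
advertised.listeners=PLAINTEXT://master:9092
# Identical ZooKeeper connection string (including any chroot) on every broker
zookeeper.connect=master:2181,slave:2181,slave1:2181,slave2:2181

For rule 1 to hold, the four ZooKeeper instances also have to form a single ensemble rather than four standalone servers, so each node's zookeeper.properties needs the ensemble members listed (plus a myid file in dataDir containing that node's server number), roughly:

dataDir=/home/ec2-user/data/zookeeper
clientPort=2181
maxClientCnxns=0
tickTime=2000
initLimit=5
syncLimit=2
server.1=master:2888:3888
server.2=slave:2888:3888
server.3=slave1:2888:3888
server.4=slave2:2888:3888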

Word of advice: don't store ZooKeeper and Kafka data on the same volume. Also store the OS and process logs separately from the actual ZooKeeper and Kafka data.
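
For example, assuming separate EBS volumes mounted at /data/kafka and /data/zookeeper (hypothetical paths; the point is only that they are different volumes from each other and from the root/OS volume):

# server.properties
log.dirs=/data/kafka/kafka-logs
# zookeeper.properties
dataDir=/data/zookeeper

and leave the OS and process logs (e.g. /var/log and the Kafka application logs directory) on the root volume.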
