
Using The Kafka-Topics Script In CDP7


I am trying to run what in my old cluster was a simple command

kafka-topics --bootstrap-server mybroker:9092 --list

Now, with a CDP7 cluster and Ranger installed, I get the following error. What essential thing am I missing here? Do I need a certain Ranger policy? Does this user have to be Kerberized? Is it something else? I am trying to reuse old management scripts to create my topics, but they all rely on getting the kafka-topics script to work. I have turned TLS off for the brokers.


21/01/10 13:47:49 INFO utils.Log4jControllerRegistration$: Registered kafka:type=kafka.Log4jController MBean
21/01/10 13:47:49 INFO admin.AdminClientConfig: AdminClientConfig values: 
	bootstrap.servers = [mybroker:9092]
	client.dns.lookup = default
	client.id =
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS

21/01/10 13:47:49 INFO utils.AppInfoParser: Kafka version:
21/01/10 13:47:49 INFO utils.AppInfoParser: Kafka commitId: 79e841231b59b25d
21/01/10 13:47:49 INFO utils.AppInfoParser: Kafka startTimeMs: 1610275669932
21/01/10 13:47:52 INFO internals.AdminMetadataManager: [AdminClient clientId=adminclient-1] Metadata update failed
org.apache.kafka.common.errors.DisconnectException: Cancelled fetchMetadata request with correlation id 11 due to node -1 being disconnected
Error while executing topic command : org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
21/01/10 13:49:49 ERROR admin.TopicCommand$: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
	at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(
	at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(
	at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(
	at org.apache.kafka.common.internals.KafkaFutureImpl.get(
	at kafka.admin.TopicCommand$AdminClientTopicService.getTopics(TopicCommand.scala:313)
	at kafka.admin.TopicCommand$AdminClientTopicService.listTopics(TopicCommand.scala:249)
	at kafka.admin.TopicCommand$.main(TopicCommand.scala:65)
	at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.

21/01/10 13:49:49 INFO internals.AdminMetadataManager: [AdminClient clientId=adminclient-1] Metadata update failed
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.





I have done the following:

  1. Created a new group that is visible to Ranger and added my user to it.
  2. Added that group to an existing Kafka-Ranger policy that has full permissions on the Kafka cluster.
  3. Ran kinit as my user while logged onto the Kafka broker.
  4. Ran the script again.

I still get the same error message.
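Before digging further, it can help to confirm that the kinit actually produced a valid ticket. A quick sanity check (assuming a standard MIT Kerberos client on the broker host; the principal name below is a placeholder):

```shell
# Show the current ticket cache; an empty or expired cache means
# the client cannot authenticate via GSSAPI at all
klist

# If there is no valid ticket, obtain one interactively
kinit myuser@MY.REALM
```

Note that a valid ticket alone is not enough: the Kafka client also has to be told to use it, which is what the jaas.conf and client properties below are for.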


I eventually found the answer in this document.

The steps you need are:

1: Create a jaas.conf file to describe how you will authenticate with Kerberos.

Either interactively with kinit (using the ticket cache):

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};

or non-interactively with a keytab:

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="<path to keytab>"
  principal="<principal>";
};

2: Create a client properties file to describe how you will authenticate

Either with TLS:

security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=<path to jks file>
ssl.truststore.password=<password for truststore>

Or without TLS:

security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
3: Create the environment variable KAFKA_OPTS to contain the JVM parameter that points at your jaas.conf:

export KAFKA_OPTS="-Djava.security.auth.login.config=<path to jaas.conf>"


Then you can run the tool by referencing the Kafka brokers and the client config.

BOOTSTRAP=<kafka brokers URL>

kafka-topics --bootstrap-server $BOOTSTRAP --command-config <path to client.properties> --list


You will also need a Ranger policy that covers what you are trying to do.
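Putting the steps together, a minimal end-to-end session might look like this. This is a sketch for the no-TLS case (since TLS was turned off on the brokers); the file paths, principal, and broker address are placeholders for your environment:

```shell
# 1: jaas.conf telling the Kafka client to use the Kerberos ticket cache
cat > /tmp/jaas.conf <<'EOF'
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};
EOF

# 2: client properties selecting SASL (Kerberos) over plaintext
cat > /tmp/client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
EOF

# 3: point the JVM at the jaas.conf
export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/jaas.conf"

# Obtain a ticket, then run the tool with the client config
kinit myuser@MY.REALM
kafka-topics --bootstrap-server mybroker:9092 \
  --command-config /tmp/client.properties --list
```

With this in place the old management scripts only need the extra --command-config argument (and KAFKA_OPTS in their environment) to keep working.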