Member since: 01-05-2018
Posts: 9
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 3080 | 02-05-2018 05:51 PM |
05-07-2019 05:24 AM
I removed CDK 4.0 and instead added CDK 3.1 (which ships Kafka 1.0.1). Now the consumer receives the messages on the console, so there appears to be some issue with the CDK 4.0 integration.
05-07-2019 12:28 AM
I am using CDH 5.13 and CDK 4.0 (Apache Kafka 2.1) and am getting the same problem. However, the consumer is still not receiving messages after deleting the brokers and topics as you suggested.
05-06-2019 11:45 PM
I recently set up a Cloudera QuickStart VM from the Docker image and installed the Kafka parcel in it. After a successful installation, all of the services show green status (including Kafka and ZooKeeper). However, when I run the Kafka CLI commands below, the consumer does not receive any messages.
Any help is greatly appreciated. CDH 5.13, CDK 4.0 (Kafka 2.1), installed via parcel.
kafka-topics --create quickstart.cloudera:9092 --replication-factor 1 --partitions 1 --topic test3 --zookeeper quickstart.cloudera:2181
The topic is created successfully on the console.
Console Consumer (CLI terminal 1):
kafka-console-consumer --bootstrap-server quickstart.cloudera:9092 --topic test3
The consumer starts on the console in Terminal 1.
Console Producer:
kafka-console-producer --broker-list quickstart.cloudera:9092 --topic test3
The producer is created in Terminal 2. Now, when I type anything into the producer console in Terminal 2, the consumer terminal doesn't show anything.
Please suggest what is missing here. I need help to debug this situation.
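As a first sanity check, it can help to rule out the console consumer's default offset behavior: with default settings, a new consumer group only sees messages produced after it joins, so replaying the topic with --from-beginning shows whether any messages reached the topic at all. A minimal sketch, assuming the QuickStart broker address from the commands above:

```shell
# Sketch: replay test3 from the earliest offset to check whether any
# messages actually reached the topic. The broker address is taken from
# the commands above. The command is built as a string here; it would be
# run on the QuickStart VM itself.
check_cmd="kafka-console-consumer \
--bootstrap-server quickstart.cloudera:9092 \
--topic test3 --from-beginning"
echo "$check_cmd"
```

If messages appear with --from-beginning but not live, the producer and broker are fine and the problem is on the consuming side; if nothing appears either way, the messages are not reaching the broker at all.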
kafka-topics --zookeeper quickstart.cloudera:2181 --describe --topic test4
19/05/07 09:18:28 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /172.17.0.2:60968, server: quickstart.cloudera/172.17.0.2:2181
19/05/07 09:18:28 INFO zookeeper.ClientCnxn: Session establishment complete on server quickstart.cloudera/172.17.0.2:2181, sessionid = 0x16a915bca140112, negotiated timeout = 30000
19/05/07 09:18:28 INFO zookeeper.ZooKeeperClient: [ZooKeeperClient] Connected.
Topic:test4 PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test4 Partition: 0 Leader: 37 Replicas: 37 Isr: 37
19/05/07 09:18:28 INFO zookeeper.ZooKeeperClient: [ZooKeeperClient] Closing.
19/05/07 09:18:28 INFO zookeeper.ClientCnxn: EventThread shut down
19/05/07 09:18:28 INFO zookeeper.ZooKeeper: Session: 0x16a915bca140112 closed
19/05/07 09:18:28 INFO zookeeper.ZooKeeperClient: [ZooKeeperClient] Closed.
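The describe output above can also be checked programmatically. A small sketch that extracts the Leader and Isr fields from a partition line (using the sample line from the output above): an empty Isr or a Leader of -1 would point to an unhealthy broker, whereas here both show broker 37, so the partition itself looks healthy.

```shell
# Sketch: pull Leader and Isr out of a 'kafka-topics --describe' partition
# line. The sample line is copied from the output above; field positions
# follow the whitespace-separated layout of that output.
line='Topic: test4 Partition: 0 Leader: 37 Replicas: 37 Isr: 37'
leader=$(printf '%s\n' "$line" | awk '{print $6}')
isr=$(printf '%s\n' "$line" | awk '{print $10}')
echo "leader=$leader isr=$isr"
```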
02-05-2018 05:51 PM
Thanks for the reply. HDFS starts in green after making the changes below:
- DataNode HTTP Web UI Port: 50075
- Secure DataNode Web UI Port (TLS/SSL): 50475
- DataNode Transceiver Port: 50010
- DataNode Data Transfer Protection: Authentication
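For reference, the four settings above map roughly onto these hdfs-site.xml properties (a sketch based only on the values stated in this reply; Cloudera Manager normally manages these through its own UI fields rather than hand-edited XML):

```xml
<!-- Sketch of the equivalent hdfs-site.xml values; in practice these are
     set through the Cloudera Manager fields named in the post above. -->
<property>
  <name>dfs.datanode.http.address</name>    <!-- DataNode HTTP Web UI Port -->
  <value>0.0.0.0:50075</value>
</property>
<property>
  <name>dfs.datanode.https.address</name>   <!-- Secure DataNode Web UI Port (TLS/SSL) -->
  <value>0.0.0.0:50475</value>
</property>
<property>
  <name>dfs.datanode.address</name>         <!-- DataNode Transceiver Port -->
  <value>0.0.0.0:50010</value>
</property>
<property>
  <name>dfs.data.transfer.protection</name> <!-- DataNode Data Transfer Protection -->
  <value>authentication</value>
</property>
```

Note that 50010 and 50075 are non-privileged ports (above 1024), which is what allows the DataNode to start with SASL data transfer protection instead of privileged resources.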
02-03-2018 10:24 AM
Hi, I am setting up TLS/SSL and Kerberos on a single-user setup of Cloudera Manager. The Cloudera Manager version is 5.12 and the underlying CDH parcel is 5.11. The Kerberos setup uses MIT KDC, and TLS/SSL is configured up to Level 1. After doing this, when I restart CM, the Agents, and HDFS, HDFS does not restart. The error is as follows:
5:49:39.498 PM FATAL DataNode Exception in secureMain
java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP. Using privileged resources in combination with SASL RPC data transfer protection is not supported.
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1333)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1233)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:464)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2545)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2432)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2479)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2661)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2685)
After searching for a probable solution on Google, I stumbled upon a link that calls for additional configuration in single-user setups. The section 'Configuration for Secure Clusters' describes four additional steps: https://www.cloudera.com/documentation/enterprise/5-11-x/topics/install_singleuser_reqts.html
I have performed the HDFS-with-TLS steps but am not sure what to do for the remaining two:
- Do not configure the DataNode Transceiver port and HTTP Web UI port to use privileged ports.
- Configure DataNode data transfer protection.
Please suggest what is expected for these two steps in single-user mode. Thanks
Labels:
- Cloudera Manager
- Kerberos