Member since: 10-04-2017

- Posts: 113
- Kudos Received: 11
- Solutions: 9
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 19908 | 07-03-2019 08:34 AM |
| | 2452 | 10-31-2018 02:16 AM |
| | 13997 | 05-11-2018 01:31 AM |
| | 9893 | 02-21-2018 03:25 AM |
| | 3462 | 02-21-2018 01:18 AM |
			
    
	
		
		
12-28-2017 03:21 AM

Had to go with Sentry and HDFS. Sentry is tightly coupled with HDFS and has a mandatory "HDFS Service" configuration, so you need HDFS present. You can configure both HDFS and Sentry, then stop HDFS once Sentry is fully configured.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
12-28-2017 03:19 AM · 2 Kudos

Hi @ebeb,

You need to disable the Sentry service in the Kafka configuration if you are not using it.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
12-15-2017 07:46 AM · 1 Kudo

Hi,

We have a new cluster with CDH 5.11.2 that runs only the Kafka and ZooKeeper services. ZooKeeper occasionally goes bad with the error below, so the Kafka brokers show green in Cloudera Manager but are actually unhealthy.

ZooKeeper log:

    2017-12-14 18:31:00,004 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /xx.xxx.x.xx:33488 which had sessionid 0x25fd2436dd488c5
    2017-12-14 18:31:03,691 ERROR org.apache.zookeeper.server.quorum.LearnerHandler: Unexpected exception causing shutdown while sock still open
    java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
        at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
        at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
        at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:499)
    2017-12-14 18:31:03,691 WARN org.apache.zookeeper.server.quorum.LearnerHandler: ******* GOODBYE /xx.xxx.x.xx:58030 ********
    2017-12-14 18:31:18,000 INFO org.apache.zookeeper.server.ZooKeeperServer: Expiring session 0x360200cd8173815, timeout of 30000ms exceeded
    2017-12-14 18:31:18,000 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x360200cd8173815

Kafka log:

    2017-12-14 18:31:11,274 INFO org.apache.curator.framework.state.ConnectionStateManager: State change: SUSPENDED
    2017-12-14 18:31:17,275 ERROR org.apache.curator.ConnectionState: Connection timed out for connection string (xxxxxxxxx.devkafka.pre.corp:2181,xxxxxxxxx.devkafka.pre.corp:2181,xxxxxxxxx.devkafka.pre.corp:2181/kafkadev) and timeout (6000) / elapsed (6002)
    org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
        at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:195)
        at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:87)
        at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:115)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:821)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:807)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:63)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
    2017-12-14 18:31:18,275 ERROR org.apache.curator.ConnectionState: Connection timed out for connection string (xxxxxxxxx.devkafka.pre.corp:2181,xxxxxxxxx.devkafka.pre.corp:2181,xxxxxxxxx.devkafka.pre.corp:2181/kafkadev) and timeout (6000) / elapsed (7002)
    org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
        at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:195)
        at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:87)
        at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:115)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:821)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:807)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:63)

Labels: Apache Kafka, Apache Zookeeper
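A quick way to verify whether the ZooKeeper servers are actually serving, independently of the green status in Cloudera Manager, is ZooKeeper's four-letter-word admin commands. A minimal sketch, assuming `nc` is installed and ZooKeeper listens on the default port 2181; the hostnames are placeholders:

```shell
# Each healthy server answers "imok" to "ruok"; a hung or dead server
# (like the one timing out in the log above) returns nothing.
for zk in zk1.example.com zk2.example.com zk3.example.com; do
  printf '%s: ' "$zk"
  echo ruok | nc -w 5 "$zk" 2181 || echo "no response"
  echo
done

# "stat" additionally reports the server's mode (leader/follower),
# latencies, and client connection counts.
echo stat | nc -w 5 zk1.example.com 2181
```

If a follower repeatedly drops out of the quorum with read timeouts, checking network latency between the ZooKeeper hosts and reviewing the tick-based quorum timeouts (`initLimit`/`syncLimit`) is a common next step.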
			
    
	
		
		
12-15-2017 02:38 AM

Hi @pdvorak,

We did try that approach, but our streaming cluster runs only the Kafka and ZooKeeper services. When we tried adding Sentry, it required the HDFS service to be present as well; I am not sure why HDFS is required just for Sentry to be available. I also tried adding ACLs from the command line; the ACLs were created, but that did not work.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
12-14-2017 06:00 AM

Hi,

Does Kafka 2.2.0 in CDH 5.11.2 support ACLs on topics? Can we use AD users and groups for these ACLs? Is there any documentation for this? We have Kerberos enabled.

Labels: Apache Kafka
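Kafka in this CDH release ships the stock `kafka-acls` tool, which manages topic ACLs stored in ZooKeeper. A minimal sketch; the ZooKeeper quorum, principal, and topic name are placeholders, and on a kerberized cluster the command has to run as a user permitted to write the ACL znodes (typically the kafka superuser):

```shell
# Grant a Kerberos principal read and write access on topic TEST.
kafka-acls --authorizer-properties zookeeper.connect=zk1.example.com:2181 \
  --add \
  --allow-principal User:appuser@EXAMPLE.COM \
  --operation Read --operation Write \
  --topic TEST

# Verify what is currently set on the topic.
kafka-acls --authorizer-properties zookeeper.connect=zk1.example.com:2181 \
  --list --topic TEST
```

AD users surface to the authorizer as `User:` principals via Kerberos principal mapping; the stock authorizer has no group concept, which is where Sentry's role/group semantics come in.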
			
    
	
		
		
12-01-2017 04:17 AM

@Tomas79 Thanks, this worked.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
12-01-2017 04:16 AM

Hi,

We would like to use authorization for Kafka topics in CDH 5.11.2. Is it mandatory to have Sentry for this, or can we use regular Kafka ACLs? We have Kerberos and AD integrated as well.

Labels: Apache Kafka, Apache Sentry, Kerberos
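For the plain-ACL route, the broker has to load an authorizer; without one, ACL entries written to ZooKeeper are simply ignored. A sketch of the relevant broker properties (names follow stock Apache Kafka 0.10.x; in Cloudera Manager these are typically set through the Kafka configuration or a safety valve rather than a hand-edited file):

```properties
# Load the stock ACL authorizer shipped with Kafka 0.10.x.
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# Superusers bypass ACL checks; the brokers' own principal usually needs this.
super.users=User:kafka
# Optional during migration: permit access to resources that have no ACLs yet.
allow.everyone.if.no.acl.found=true
```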
			
    
	
		
		
11-23-2017 02:12 AM

Tried using the kafka service keytab. Below is the topic created with the kafka service keytab, but it is the same issue.

    Topic:Hello1  PartitionCount:2  ReplicationFactor:2  Configs:min.insync.replicas=2
        Topic: Hello1  Partition: 0  Leader: 35  Replicas: 35,38  Isr: 35,38
        Topic: Hello1  Partition: 1  Leader: 38  Replicas: 38,33  Isr: 38,33

Producer:

    kafka-verifiable-producer.sh --topic Hello1 --broker-list server1.kafka2.pre.corp:9092,server2.kafka2.pre.corp:9092 --producer.config client.properties
    17/11/23 09:58:07 INFO utils.AppInfoParser: Kafka version : 0.10.2-kafka-2.2.0
    17/11/23 09:58:07 INFO utils.AppInfoParser: Kafka commitId : unknown
    1 2 3 4 45
    17/11/23 09:58:25 INFO producer.KafkaProducer: Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
    {"name":"shutdown_complete"}
    {"sent":1,"name":"tool_data","avg_throughput":0.0,"target_throughput":-1,"acked":0}

Consumer:

    kafka-console-consumer --topic Hello1 --from-beginning --zookeeper server.kafka2.pre.corp:2181,server.kafka2.pre.corp:2181/kafka2dc1pre --consumer.config consumer.properties
    17/11/23 09:59:04 INFO utils.ZKCheckedEphemeral: Creating /consumers/console-consumer-59653/ids/console-consumer-59653_server1.kafka2.pre.corp-1511431144657-52e4b1e7 (is it secure? false)
    17/11/23 09:59:04 INFO utils.ZKCheckedEphemeral: Result of znode creation is: OK
    ...
    17/11/23 09:59:05 INFO consumer.ZookeeperConsumerConnector: [console-consumer-59653_server1.kafka2.pre.corp-1511431144657-52e4b1e7], end rebalancing consumer console-consumer-59653_server1.kafka2.pre.corp-1511431144657-52e4b1e7 try #0
    17/11/23 09:59:05 INFO consumer.ZookeeperConsumerConnector: [console-consumer-59653_server1.kafka2.pre.corp-1511431144657-52e4b1e7], Creating topic event watcher for topics Hello1
    17/11/23 09:59:05 INFO consumer.ZookeeperConsumerConnector: [console-consumer-59653_server1.kafka2.pre.corp-1511431144657-52e4b1e7], Topics to consume = ArrayBuffer(Hello1)
    17/11/23 09:59:05 WARN consumer.ConsumerFetcherManager$LeaderFinderThread: [console-consumer-59653_server1.kafka2.pre.corp-1511431144657-52e4b1e7-leader-finder-thread], Failed to find leader for Set(Hello1-0, Hello1-1)
    kafka.common.BrokerEndPointNotAvailableException: End point with security protocol PLAINTEXT not found for broker 33
        at kafka.client.ClientUtils$anonfun$getPlaintextBrokerEndPoints$1$anonfun$apply$5.apply(ClientUtils.scala:146)
        at kafka.client.ClientUtils$anonfun$getPlaintextBrokerEndPoints$1$anonfun$apply$5.apply(ClientUtils.scala:146)
        at scala.Option.getOrElse(Option.scala:121)
        at kafka.client.ClientUtils$anonfun$getPlaintextBrokerEndPoints$1.apply(ClientUtils.scala:146)
        at kafka.client.ClientUtils$anonfun$getPlaintextBrokerEndPoints$1.apply(ClientUtils.scala:142)
        at scala.collection.TraversableLike$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
        at kafka.client.ClientUtils$.getPlaintextBrokerEndPoints(ClientUtils.scala:142)
        at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:67)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
    17/11/23 09:59:05 INFO consumer.ConsumerFetcherManager: [ConsumerFetcherManager-1511431144680] Added fetcher for partitions ArrayBuffer()
    17/11/23 09:59:05 WARN consumer.ConsumerFetcherManager$LeaderFinderThread: [console-consumer-59653_server1.kafka2.pre.corp-1511431144657-52e4b1e7-leader-finder-thread], Failed to find leader for Set(Hello1-0, Hello1-1)
    kafka.common.BrokerEndPointNotAvailableException: End point with security protocol PLAINTEXT not found for broker 33
        at kafka.client.ClientUtils$anonfun$getPlaintextBrokerEndPoints$1$anonfun$apply$5.apply(ClientUtils.scala:146)
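The `BrokerEndPointNotAvailableException: End point with security protocol PLAINTEXT not found` in the consumer output is what the old ZooKeeper-based consumer reports when the brokers only advertise a SASL listener: the `--zookeeper` consumer can only use PLAINTEXT. A sketch of the new-consumer invocation instead, reusing the same SASL settings (hostnames are placeholders):

```shell
# consumer.properties must carry the SASL settings, e.g.:
#   security.protocol=SASL_PLAINTEXT
#   sasl.kerberos.service.name=kafka
kafka-console-consumer \
  --new-consumer \
  --bootstrap-server server1.kafka2.pre.corp:9092,server2.kafka2.pre.corp:9092 \
  --topic Hello1 --from-beginning \
  --consumer.config consumer.properties
```

Passing `--bootstrap-server` rather than `--zookeeper` selects the Java consumer, which can negotiate the brokers' SASL_PLAINTEXT listener.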
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-22-2017 05:54 AM

Hi,

We have recently started using Kafka 0.10.2 but are unable to produce or consume any messages. Kerberos is enabled. Below are my configs. There is no error, and the Kafka data log does not get any entries either, but the index gets updated whenever we run a producer.

    kafka-console-producer --broker-list kafka1.test.com:9092,kafka2.test.com:9092 --producer.config client.properties --topic TEST
    kafka-console-consumer --topic TEST --from-beginning --bootstrap-server kafka1.test.com:9092,kafka2.test.com:9092 --consumer.config consumer.properties

JAAS:

    KafkaClient {
      com.sun.security.auth.module.Krb5LoginModule required
      useTicketCache=true;
    };

client.properties / consumer.properties:

    security.protocol=SASL_PLAINTEXT
    sasl.kerberos.service.name=kafka

Producer output:

    17/11/22 12:43:01 ERROR internals.ErrorLoggingCallback: Error when sending message to topic TEST with key: null, value: 4 bytes with error: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
    17/11/22 12:44:01 ERROR internals.ErrorLoggingCallback: Error when sending message to topic TEST with key: null, value: 2 bytes with error: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
    17/11/22 12:45:01 ERROR internals.ErrorLoggingCallback: Error when sending message to topic TEST with key: null, value: 5 bytes with error: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
    17/11/22 12:46:01 ERROR internals.ErrorLoggingCallback: Error when sending message to topic TEST with key: null, value: 4 bytes with error: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

Labels: Apache Kafka, Kerberos
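One detail the configs above depend on: the console tools only read the JAAS file if the JVM is pointed at it via a system property. A minimal sketch, assuming the `KafkaClient` snippet is saved as `/etc/kafka/jaas.conf` (a placeholder path) and that a Kerberos ticket exists, since the entry uses `useTicketCache=true`:

```shell
# Point the Kafka tool JVMs at the JAAS configuration; the launcher
# scripts pass KAFKA_OPTS through to the JVM.
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/jaas.conf"

# useTicketCache=true means the client reuses an existing ticket,
# so obtain one first.
kinit appuser@EXAMPLE.COM

kafka-console-producer --broker-list kafka1.test.com:9092,kafka2.test.com:9092 \
  --producer.config client.properties --topic TEST
```

This is not guaranteed to resolve the metadata timeout above, but launching the console tools without the login property is a common gap when Kerberos is first enabled.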
			
    
	
		
		
11-16-2017 03:21 AM

The default size for Write-Ahead Log (WAL) segments has been reduced from 64 MB to 8 MB in CDH 5.12.
				
			
			
			
			
			
			
			
			
			