Member since: 10-28-2016

| Posts | Kudos Received | Solutions |
|---|---|---|
| 392 | 7 | 20 |
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3467 | 03-12-2018 02:28 AM |
| | 5186 | 12-18-2017 11:41 PM |
| | 3640 | 07-17-2017 07:01 PM |
| | 2567 | 07-13-2017 07:20 PM |
| | 8213 | 07-12-2017 08:31 PM |
06-08-2017 06:54 PM
Hi All, I'm trying to transfer data between Kafka clusters using Kafka MirrorMaker and running into issues. I've created consumer.config and producer.config files and am using the command shown below.

The error indicates: requirement failed: Missing required property 'zookeeper.connect'

------------------------------ Command line error ------------------------------
$KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config $KAFKA10_HOME/config/mmConsumer.config --num.streams 2 --producer.config $KAFKA10_HOME/config/mmProducer.config --whitelist="mmtopic"
[2017-06-08 11:32:55,962] ERROR Exception when starting mirror maker. (kafka.tools.MirrorMaker$)
java.lang.IllegalArgumentException: requirement failed: Missing required property 'zookeeper.connect'
  at scala.Predef$.require(Predef.scala:224)
  at kafka.utils.VerifiableProperties.getString(VerifiableProperties.scala:177)
  at kafka.utils.ZKConfig.<init>(ZkUtils.scala:902)
  at kafka.consumer.ConsumerConfig.<init>(ConsumerConfig.scala:101)
  at kafka.consumer.ConsumerConfig.<init>(ConsumerConfig.scala:105)
  at kafka.tools.MirrorMaker$$anonfun$3.apply(MirrorMaker.scala:306)
  at kafka.tools.MirrorMaker$$anonfun$3.apply(MirrorMaker.scala:304)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.immutable.Range.foreach(Range.scala:160)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
  at kafka.tools.MirrorMaker$.createOldConsumers(MirrorMaker.scala:304)
  at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:233)
  at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
Exception in thread "main" java.lang.NullPointerException
  at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:286)
  at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
 
I tried adding the option (--zookeeper.connect=localhost:21810), and it gives the error: zookeeper.connect is not a recognized option.

------------------------------ Command line error ------------------------------
[2017-06-08 11:40:11,033] ERROR Exception when starting mirror maker. (kafka.tools.MirrorMaker$)
joptsimple.UnrecognizedOptionException: zookeeper.connect is not a recognized option
  at joptsimple.OptionException.unrecognizedOption(OptionException.java:108)
  at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:449)
  at joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)
  at joptsimple.OptionParser.parse(OptionParser.java:381)
  at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:167)
  at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
Exception in thread "main" java.lang.NullPointerException
  at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:286)
  at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
Any ideas on what needs to be done?
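For reference, MirrorMaker in Kafka 0.10 with the old (ZooKeeper-based) consumer reads zookeeper.connect from the consumer.config file rather than from a command-line flag, which is consistent with both errors above (the property is required by ConsumerConfig, and no --zookeeper.connect option exists). A minimal sketch of the two config files; host names, ports, and the group id are placeholders:

```properties
# mmConsumer.config -- source cluster settings (old consumer).
# zookeeper.connect must live here; there is no MirrorMaker CLI flag for it.
zookeeper.connect=source-zk-host:2181
group.id=mirrormaker-group
auto.offset.reset=smallest

# mmProducer.config -- target cluster settings.
bootstrap.servers=target-broker-host:9092
```

With zookeeper.connect present in the consumer config, the original kafka-run-class.sh invocation can stay exactly as shown above.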
       
						
					
    
	
		
		
06-02-2017 07:07 PM
Hello All, I have HDP 2.5 and Kafka 0.9. I have a sample Kafka producer program pushing data into a Kafka topic, and a consumer reading data from the topic. I'm trying to simulate having one or more Kafka brokers out of sync (i.e. not in the ISR). The idea is to kill the leader of a partition and see if there is data loss because of the brokers not being in sync. Any tips on how to do that, or has anyone done this kind of testing?
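One possible way to run this experiment with the stock Kafka CLI tools; this is a sketch only, assuming a cluster with at least three brokers (topic name, ZooKeeper host, and port are placeholders):

```shell
# 1) Create a topic whose partition is replicated across 3 brokers,
#    so a leader kill is survivable.
bin/kafka-topics.sh --zookeeper zk-host:2181 --create \
  --topic isr-test --partitions 1 --replication-factor 3

# 2) Record the current leader and ISR for the partition.
bin/kafka-topics.sh --zookeeper zk-host:2181 --describe --topic isr-test

# 3) On the leader's host, kill the broker process (kill -9 <broker-pid>),
#    then re-describe the topic: the leader should move to another replica
#    and the killed broker should drop out of the ISR.
bin/kafka-topics.sh --zookeeper zk-host:2181 --describe --topic isr-test

# 4) Compare the count of messages produced vs. messages consumed
#    after recovery to check for loss.
```

Whether data is actually lost typically depends on the producer's acks setting and on unclean.leader.election.enable; producing with acks=all is the usual way to rule out loss from an out-of-sync replica being elected leader.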
 
						
					
    
	
		
		
05-15-2017 10:59 PM
@mqureshi - thanks for the detailed reply and explanation, that really helps clarify the concept. However, a follow-up on this: I've configured SSL/TLS for HDFS; how do I test this and ensure SSL is implemented correctly for HDFS? The https NameNode URL does not seem to be working, please see the screenshot attached. Also attached are the screenshot of the http NameNode URL and the configured values of dfs.https.port and dfs.namenode.https-address in hdfs-site.xml. screen-shot-2017-05-15-at-35026-pm.png screen-shot-2017-05-15-at-35101-pm.png screen-shot-2017-05-15-at-35035-pm.png
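One way to check the TLS endpoint independently of the browser is to ask it for its certificate directly; a sketch, with host and port as placeholders that should match dfs.namenode.https-address:

```shell
# Show the certificate chain the NameNode presents on its HTTPS port.
openssl s_client -connect nwk06:50470 -showcerts </dev/null

# Fetch the NameNode UI over HTTPS; -k skips trust validation, which is
# useful while the CA root is not yet in the client's trust store.
curl -vk https://nwk06:50470/
```

If s_client reports a handshake failure or shows no certificate, the server side of HTTPS is likely not configured; if it shows the expected certificate, the remaining problem is usually client-side trust.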
  
 
						
					
    
	
		
		
05-15-2017 09:46 PM
@mqureshi, @amarnath reddy pappu, thanks. I've used the steps in enable-https-for-hdfs and done the following on nwk6 (Node 6, where the NameNode is installed):
1) Generated the .jks file
2) Got the certificate signed (using OpenSSL)
3) Made entries in core-site.xml and hdfs-site.xml
4) Updated the files ssl-server.xml and ssl-client.xml
5) Restarted the HDFS service
6) I have a question about the next step:

----------------------------------------------
Step 7: Make sure you import the CA root to Ambari server by running "ambari-server setup-security"
----------------------------------------------

A couple of questions on this:
a) When I run ambari-server setup-security, I see the options given below. Should I use option 5, i.e. import the certificate to the truststore?
b) Please note: the truststore and keystore were created on nwk6 (where the NameNode is installed), while Ambari is installed on nwk7. Do the keystore and truststore need to be copied onto nwk7, or re-created there?

[root@nwk2-bdp-hadoop-07 tmp]# ambari-server setup-security
Using python /usr/bin/python
Security setup options...
===========================================================================
Choose one of the following options:
  [1] Enable HTTPS for Ambari server.
  [2] Encrypt passwords stored in ambari.properties file.
  [3] Setup Ambari kerberos JAAS configuration.
  [4] Setup truststore.
  [5] Import certificate to truststore.
===========================================================================
Enter choice, (1-5):

Please note: while the above steps (except ambari-server setup-security) have gone through fine, the https URL for the NameNode UI (https://<nwk06>:50470) is not working.
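On question (a), option [5] matches the wording of the step ("import the CA root"). For context, the manual equivalent of such an import into a JKS truststore with keytool looks roughly like this; all paths, the alias, and the password are placeholders:

```shell
# Import the CA root certificate into a JKS truststore
# (paths, alias, and password here are placeholders).
keytool -importcert -file /tmp/ca-root.crt \
  -keystore /etc/ambari-server/truststore.jks \
  -alias ca-root -storepass changeit -noprompt

# List the truststore contents to confirm the entry landed.
keytool -list -keystore /etc/ambari-server/truststore.jks -storepass changeit
```

On question (b), what the Ambari host needs in order to trust nwk6 is the CA root certificate, not the NameNode's keystore (which holds the private key), so copying just the CA cert to nwk7 for the import is the usual pattern.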
						
					
    
	
		
		
05-12-2017 06:23 AM
@amarnath reddy pappu - thanks. A quick question regarding step 3, "Now get the signed cert from CA - file name is /tmp/c6401.crt": how do I get the certificate signed?
						
					
    
	
		
		
05-11-2017 06:43 PM
@mqureshi, @Kuldeep Kulkarni, @Gerd Koenig, @Andrew Ryan - looping you in. Any ideas on this?
						
					
    
	
		
		
05-11-2017 06:38 PM
Hello - I have an HDP 2.5 cluster (8 nodes), and I'm trying to enable SSL/TLS for HDFS using the following link: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.3/bk_Security_Guide/content/ch_wire-https.html

I'm trying to create the host key using the following command:

keytool -keystore /etc/security/clientKeys/keystore.jks -genkey -alias nwk8

The client key /etc/security/clientKeys/keystore.jks is the default entry in the file /etc/hadoop/2.5.3.0-37/0/ssl-client.xml, but this keystore does not exist yet. I have some basic questions (since I don't think I understand this yet): which .jks file should I use? Is that something I get from a CA? What if I use OpenSSL? Any inputs on this would be appreciated.
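To the basic questions: the .jks keystore is something you generate yourself on the host; a CA only signs a certificate request exported from it. A sketch of the usual keytool + OpenSSL-CA flow; the alias, DN, file paths, and passwords are all placeholders:

```shell
# 1) Generate the host's key pair in a new keystore.
keytool -genkey -alias nwk8 -keyalg RSA -keysize 2048 \
  -dname "CN=nwk8.example.com,OU=IT,O=Example" \
  -keystore /etc/security/clientKeys/keystore.jks \
  -storepass changeit -keypass changeit

# 2) Export a certificate signing request (CSR) to give to the CA.
keytool -certreq -alias nwk8 \
  -keystore /etc/security/clientKeys/keystore.jks \
  -storepass changeit -file /tmp/nwk8.csr

# 3) If using your own OpenSSL CA, sign the CSR with the CA key.
openssl x509 -req -in /tmp/nwk8.csr -CA /tmp/ca-root.crt \
  -CAkey /tmp/ca-root.key -CAcreateserial -out /tmp/nwk8.crt -days 365

# 4) Import the CA root first, then the signed host certificate,
#    back into the same keystore.
keytool -importcert -alias ca-root -file /tmp/ca-root.crt \
  -keystore /etc/security/clientKeys/keystore.jks \
  -storepass changeit -noprompt
keytool -importcert -alias nwk8 -file /tmp/nwk8.crt \
  -keystore /etc/security/clientKeys/keystore.jks -storepass changeit
```

Either a real CA or a self-run OpenSSL CA works for step 3; what matters is that every client's truststore contains the CA root that signed the host certificates.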
						
					
Labels: Hortonworks Data Platform (HDP)
    
	
		
		
05-04-2017 10:27 PM
@mqureshi - I guess what you mentioned makes sense; the error message, however, does not indicate the actual issue.
						
					
    
	
		
		
05-02-2017 11:19 PM
@mqureshi - looping you in, any ideas?
						
					