Member since: 08-08-2013

339 Posts
132 Kudos Received
27 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 16124 | 01-18-2018 08:38 AM |
|  | 2018 | 05-11-2017 06:50 PM |
|  | 10438 | 04-28-2017 11:00 AM |
|  | 4153 | 04-12-2017 01:36 AM |
|  | 3227 | 02-14-2017 05:11 AM |

12-14-2019 11:14 PM

Hi @Jibinjks, was this issue resolved? If yes, can you update the solution? I am stuck on a similar problem; the export did not work for me either.

Br
Sandeep
						
					
12-09-2018 09:13 PM

Can somebody please help?

kinit: Failed to store credentials: Internal credentials cache error (filename: /hue_krb5_ccache) while getting initial credentials
[09/Dec/2018 21:06:24 -0800] kt_renewer   INFO     Reinitting kerberos retry attempt 2 from keytab /bin/kinit -k -t /run/cloudera-scm-agent/process/450-hue-KT_RENEWER/hue.keytab -c /hue_krb5_ccache hue/kabo1.unraveldatalab.com@unravel.COM
[09/Dec/2018 21:06:24 -0800] kt_renewer   ERROR    Couldn't reinit from keytab! `kinit' exited with 1.
kinit: Failed to store credentials: Internal credentials cache error (filename: /hue_krb5_ccache) while getting initial credentials
[09/Dec/2018 21:06:24 -0800] kt_renewer   ERROR    FATAL: max_retries of 3 reached. Exiting...
[09/Dec/2018 21:06:28 ] settings     INFO     Welcome to Hue 3.9.0
[09/Dec/2018 21:06:31 -0800] __init__     INFO     Couldn't import snappy. Support for snappy compression disabled.
[09/Dec/2018 21:06:31 -0800] kt_renewer   INFO     Reinitting kerberos retry attempt 0 from keytab /bin/kinit -k -t /run/cloudera-scm-agent/process/450-hue-KT_RENEWER/hue.keytab -c /hue_krb5_ccache hue/kabo1.unraveldatalab.com@unravel.COM
[09/Dec/2018 21:06:31 -0800] kt_renewer   ERROR    Couldn't reinit from keytab! `kinit' exited with 1.
kinit: Failed to store credentials: Internal credentials cache error (filename: /hue_krb5_ccache) while getting initial credentials
[09/Dec/2018 21:06:34 -0800] kt_renewer   INFO     Reinitting kerberos retry attempt 1 from keytab /bin/kinit -k -t /run/cloudera-scm-agent/process/450-hue-KT_RENEWER/hue.keytab -c /hue_krb5_ccache hue/kabo1.unraveldatalab.com@unravel.COM
[09/Dec/2018 21:06:34 -0800] kt_renewer   ERROR    Couldn't reinit from keytab! `kinit' exited with 1.
kinit: Failed to store credentials: Internal credentials cache error (filename: /hue_krb5_ccache) while getting initial credentials
[09/Dec/2018 21:06:37 -0800] kt_renewer   INFO     Reinitting kerberos retry attempt 2 from keytab /bin/kinit -k -t /run/cloudera-scm-agent/process/450-hue-KT_RENEWER/hue.keytab -c /hue_krb5_ccache hue/kabo1.unraveldatalab.com@unravel.COM
[09/Dec/2018 21:06:37 -0800] kt_renewer   ERROR    Couldn't reinit from keytab! `kinit' exited with 1.
kinit: Failed to store credentials: Internal credentials cache error (filename: /hue_krb5_ccache) while getting initial credentials
[09/Dec/2018 21:06:37 -0800] kt_renewer   ERROR    FATAL: max_retries of 3 reached. Exiting...
[root@kabo1 ~]# cat /etc/krb5.conf
[libdefaults]
default_realm = unravel.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
default_tkt_enctypes = aes256-cts-hmac-sha1-96
default_tgs_enctypes = aes256-cts-hmac-sha1-96
permitted_enctypes = aes256-cts-hmac-sha1-96
allow_weak_crypto = true
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
unravel.COM = {
  kdc = kabo1.unraveldatalab.com
  admin_server = kabo1.unraveldatalab.com
}
[domain_realm]

[root@kabo1 ~]# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
EXAMPLE.COM = {
  #master_key_type = aes256-cts
  max_renewable_life = 7d 0h 0m 0s
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  default_principal_flags = +renewable
}
						
					
10-13-2018 03:54 AM
1 Kudo

Hi Harsh,

Thanks a lot for your support, really appreciate it.

I was able to make HBase stable by adding the line you mentioned, but one change was required:

-Dzookeeper.skipACL=yes

We need to give "yes", not "true". It worked for me. Thanks for making my cluster happy.

Regards
Ayush
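For reference, a minimal sketch of one place this flag can go, assuming ZooKeeper picks up extra JVM options from conf/java.env (the exact file and mechanism depend on how ZooKeeper is managed in your distribution):

# Hypothetical conf/java.env on each ZooKeeper server.
# Appends the ACL-skipping flag to the server JVM so ZooKeeper ignores znode ACL checks.
# Use only temporarily (e.g. to repair the /hbase znodes), since it disables ZooKeeper authorization.
export SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dzookeeper.skipACL=yes"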
						
					
02-28-2019 08:25 AM

Hi @Rodrigo Hjort, did you solve this problem, and if so, how?
						
					
05-10-2018 09:20 PM
1 Kudo

@Mudit Kumar

You have deployed and secured your multi-node cluster with an MIT KDC running on a Linux box (dedicated or not); the same steps also apply to a single-node cluster. Below is a step-by-step procedure.

Assumptions:
- The KDC is installed and running
- The KDC admin user and master password are available
- REALM: DEV.COM
- Users: user1 to user5
- Edge node: where the users run their Kerberos clients
- The admin user is root or a sudoer

A good solution, security-wise, is to copy the generated keytabs to the users' home directories. If these are local Unix users (NOT Active Directory), create the keytabs in e.g. /tmp, later copy them to the respective home directories, and make sure to set the correct permissions on the keytabs. Note that all client software is installed on the dedicated edge node, not on the data or name nodes!

Change directory to /tmp:

# cd /tmp

Create the principal for user1 and specify its password (with root access there is no need for sudo):

# kadmin.local
Authenticating as principal root/admin@DEV.COM with password.
kadmin.local: addprinc user1@DEV.COM
WARNING: no policy specified for user1@DEV.COM; defaulting to no policy
Enter password for principal "user1@DEV.COM":
Re-enter password for principal "user1@DEV.COM":
Principal "user1@DEV.COM" created.

Do the same for all the other users:

addprinc user2@DEV.COM
addprinc user3@DEV.COM
addprinc user4@DEV.COM
addprinc user5@DEV.COM

Generate the keytab for user1; the keytab will be written to the current directory:

# ktutil
ktutil: addent -password -p user1@DEV.COM -k 1 -e RC4-HMAC
Password for user1@DEV.COM:
ktutil: wkt user1.keytab
ktutil: q

You MUST repeat the above for all five users.

Copy the newly created keytab to the user's home directory; in this example I have copied it to /etc/security/keytabs:

# cp user1.keytab /etc/security/keytabs

Change ownership and permissions (here user1 belongs to the hadmin group):

# chown user1:hadmin /etc/security/keytabs/user1.keytab

Again, do the above for all the other users. A good technical and security best practice is to copy the keytabs from the KDC to the respective home directories on the edge node and change the ownership of the keytabs.

Validate the principals; in this example the keytabs are in /etc/security/keytabs:

# klist -kt /etc/security/keytabs/user1.keytab
Keytab name: FILE:/etc/security/keytabs/user1.keytab
KVNO  Timestamp            Principal
----- -------------------- ------------------------------
1     05/10/2018 10:46:27  user1@DEV.COM

To obtain a ticket, pass the keytab together with the principal exactly as listed by klist -kt above to kinit:

# kinit -kt /etc/security/keytabs/user1.keytab user1@DEV.COM

The command below shows the validity of the Kerberos ticket:

# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: user1@DEV.COM
Valid starting             Expires               Service principal
05/10/2018 10:53:48        05/11/2018 10:53:48   krbtgt/DEV.COM@DEV.COM

You should now be able to access the cluster and run jobs on it successfully.
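For completeness, here is a small sketch (not part of the original walkthrough) that repeats the same principal-plus-keytab steps for user1 to user5 in one loop; it assumes you are root on the KDC host, the DEV.COM realm, and the /etc/security/keytabs target directory used above:

#!/usr/bin/env bash
# Sketch only: create a principal and a matching keytab for each user, then lock the file down.
set -euo pipefail
REALM="DEV.COM"
KEYTAB_DIR="/etc/security/keytabs"
mkdir -p "${KEYTAB_DIR}"

for u in user1 user2 user3 user4 user5; do
  # Prompt for the user's password so it is not hard-coded in the script.
  read -r -s -p "Password for ${u}@${REALM}: " PASSWORD; echo

  # Create the principal with the chosen password (no policy, as in the manual steps).
  kadmin.local -q "addprinc -pw ${PASSWORD} ${u}@${REALM}"

  # Mirror the interactive ktutil session: add an RC4-HMAC entry and write the keytab.
  ktutil <<EOF
addent -password -p ${u}@${REALM} -k 1 -e RC4-HMAC
${PASSWORD}
wkt ${KEYTAB_DIR}/${u}.keytab
q
EOF

  # Restrict the keytab to its owner, matching the chown step above.
  chown "${u}:hadmin" "${KEYTAB_DIR}/${u}.keytab"
  chmod 600 "${KEYTAB_DIR}/${u}.keytab"
done

Note that passing the password on the kadmin.local command line is visible in the process list, so treat this as a lab convenience; for anything sensitive, stick with the interactive prompts shown above.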
						
					
01-18-2018 08:49 AM

Thanks, the issue has been resolved.
						
					
05-17-2018 02:12 PM
1 Kudo

@Gerd Koenig Is your broker using the JAAS (username/password) configuration you created? The SASL/PLAIN configuration is well documented here; do ensure the username and password are in the JAAS file that the broker loads from its classpath.
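For illustration, a minimal sketch of a broker-side SASL/PLAIN JAAS entry; the user names and secrets below are placeholders of mine, not values from this thread:

// kafka_server_jaas.conf (illustrative placeholders only)
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_client="client-secret";
};

The username/password pair is what the broker itself uses for inter-broker connections, while each user_<name> entry defines a credential that clients may authenticate with.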
						
					
11-09-2017 02:54 PM
2 Kudos

I have resolved the issue for Solr:
1. I replaced solrconfig.xml with solrconfig.xml.secure
2. solrctl instancedir --update employee /home/Solr/employee/conf/
3. solrctl collection --reload employee
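As a consolidated sketch of the three steps (the cp command is my reconstruction of step 1, assuming the instance directory layout shown above):

# Step 1 (reconstructed): make the secure config the active solrconfig.xml
cp /home/Solr/employee/conf/solrconfig.xml.secure /home/Solr/employee/conf/solrconfig.xml
# Steps 2-3: push the updated instance directory to ZooKeeper and reload the collection
solrctl instancedir --update employee /home/Solr/employee/conf/
solrctl collection --reload employee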
						
					
05-17-2018 02:23 PM

I also came across this error. I created kafka_server_jaas.conf manually and put it under the directory /usr/hdf/current/kafka-broker/config/. Then, in the Ambari kafka-env template, I added the path to kafka_server_jaas.conf to the KAFKA_OPTS environment variable like this:

export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdf/current/kafka-broker/config/kafka_server_jaas.conf"

With these settings, the Kafka broker can start up.
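For context, a sketch of what such a file often contains on a Kerberized HDF broker; the login module, keytab path, and principal below are assumptions for illustration, not values from this thread:

// kafka_server_jaas.conf (illustrative; adjust the keytab path and principal to your cluster)
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    principal="kafka/broker-host.example.com@EXAMPLE.COM";
};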
						
					
06-27-2019 10:46 AM

Hi Bryan,

Thanks for your input; it helped me understand the properties to set up with SASL_PLAINTEXT.

I'm currently working on a project that uses the NiFi PublishKafka_0_10 processor with Event Hubs. Per the Microsoft doc (https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs#send-and-receive-messages-with-kafka-in-event-hubs), we need to map the following configuration to the properties of the PublishKafka_0_10 processor:

bootstrap.servers={YOUR.EVENTHUBS.FQDN}:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";

I've tried SASL_PLAINTEXT (as SSL is not an option in our test environment) and configured it as below. However, it still cannot connect to Event Hubs and keeps giving the error "TimeoutException: Failed to update metadata after 5000 ms".

Can you please help review the properties I set up? Perhaps something is wrong in them. I've struggled with this for a few days and look forward to your response. Thanks!
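As a point of reference, the sasl.jaas.config line from the Microsoft doc can also be expressed as a standalone JAAS file; this is only a sketch, assuming the credentials are handed to the client JVM via -Djava.security.auth.login.config rather than a processor property, and the placeholders come from the doc rather than real values:

// eventhubs_client_jaas.conf (illustrative)
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="$ConnectionString"
    password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
};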
						
					