Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 923 | 06-04-2025 11:36 PM |
| | 1525 | 03-23-2025 05:23 AM |
| | 756 | 03-17-2025 10:18 AM |
| | 2710 | 03-05-2025 01:34 PM |
| | 1801 | 03-03-2025 01:09 PM |
09-21-2018
07:32 AM
1 Kudo
@Harry Li Any updates on this? Please take a moment to reply.
09-19-2018
05:16 AM
@Harry Li Question 1: You only need to create identical mount points on both DataNodes; these will be mapped to dfs.datanode.data.dir. Question 2: You can have disks of different sizes, but it is advisable to have identical sizes on all nodes. Question 3: I haven't tested it, but I think you can add only one disk; the smaller disk will fill up faster, so at some point it will not accept any more write operations and the cluster will have no way to balance itself out. See the sketch below. HTH
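A minimal sketch of the corresponding hdfs-site.xml property, assuming two hypothetical mount points /grid/0 and /grid/1 that exist identically on every DataNode (adjust the paths to your own layout):

<property>
  <name>dfs.datanode.data.dir</name>
  <!-- comma-separated list of local directories; each DataNode writes block data across all of them -->
  <value>/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data</value>
</property>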
09-18-2018
08:28 PM
1 Kudo
@Rohit Sharma That's exactly how it was designed to function. When you create a Kerberos database (kdb5_util create -s) and generate keytabs, you are creating something like a private/public key pair: the KDC database is the DNA database, and the keytab is like a biometric passport that you present at the airport, where it is checked against the passport database (the KDC) to verify it's really you and not someone else's passport. That is exactly what is happening here: the KDC database is checking the keytabs against ABC.COM, yet you are presenting the wrong passport. So there is no way your Kafka is going to function unless you:
Recreate the KDC database
Regenerate the keytabs
Edit kdc.conf, krb5.conf and kadm5.acl
See the sketch below. HTH
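A minimal sketch of those three steps on the KDC host, assuming the realm ABC.COM from this thread and an illustrative broker hostname (the exact principal names depend on your cluster):

# 1. Recreate the KDC database for the realm (this destroys the old database)
kdb5_util destroy -f -r ABC.COM
kdb5_util create -s -r ABC.COM

# 2. Regenerate the service principal and its keytab (hostname is illustrative)
kadmin.local -q "addprinc -randkey kafka/broker1.abc.com@ABC.COM"
kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.service.keytab kafka/broker1.abc.com@ABC.COM"

# 3. After editing kdc.conf, krb5.conf and kadm5.acl, restart the KDC services
service krb5kdc restart
service kadmin restart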
09-17-2018
06:32 AM
@Ankita Ghate I think you need to follow the post-configuration steps for the client JAAS file: Configuring kafka Authentication with Kerberos. A sketch of what that file typically looks like is below.
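A minimal sketch of a kafka_client_jaas.conf for a console client, assuming you authenticate from the ticket cache rather than a keytab (settings are illustrative; follow the linked guide for your setup):

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    serviceName="kafka";
};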
09-12-2018
01:36 PM
@Ray Donovan Any updates? *** If you found that this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
09-11-2018
09:46 PM
1 Kudo
@Ray Donovan The support/compatibility matrix indeed proves you right. Hortonworks provides a sleek tool to validate its components: Supportmatrix Hortonworks. It shows that Ambari 2.7.0 supports HDP 3.0.0, HDF 3.2.0 and DP 1.2, so HDF 3.1 won't work with Ambari 2.7 and HDP 3.0.
09-11-2018
05:18 PM
@Alex M Have you tried running the command with sudo? Are you running the installation as a user other than root?
09-05-2018
08:50 PM
@Ankita Ghate The log files in your case show /etc/kafka/kafka.keytab — is that the correct location for the Kafka keytabs? On CentOS/RHEL the correct location is usually /etc/security/keytabs/*, so please use the appropriate location on Ubuntu. It seems you have an issue with your Kerberos configuration. Can you validate the following files? These paths are valid for CentOS/RHEL, so adapt them to your Ubuntu installation.

The contents of krb5.conf in /etc/krb5.conf:

[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = MSTORM.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[domain_realm]
.mstorm.com = MSTORM.COM
mstorm.com = MSTORM.COM

[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log

[realms]
MSTORM.COM = {
admin_server = kdc.mstorm.com
kdc = kdc.mstorm.com
}

The contents of kdc.conf in /var/kerberos/krb5kdc/kdc.conf:

[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88

[realms]
MSTORM.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}

The contents of kadm5.acl in /var/kerberos/krb5kdc/kadm5.acl:

*/admin@MSTORM.COM *

If you modified the above files, restart the KDC and kadmin (on CentOS):

# service krb5kdc start
# service kadmin start

Check and use the matching principal for the keytab:

$ klist -kt /etc/security/keytabs/kafka.service.keytab
Keytab name: FILE:/etc/security/keytabs/kafka.service.keytab
KVNO Timestamp         Principal
---- ----------------- ------------------------------------------------------
   1 11/15/17 01:00:50 kafka/kafka1.hostname.com@MSTORM.COM
   1 11/15/17 01:00:50 kafka/kafka1.hostname.com@MSTORM.COM
   1 11/15/17 01:00:50 kafka/kafka1.hostname.com@MSTORM.COM
   1 11/15/17 01:00:50 kafka/kafka1.hostname.com@MSTORM.COM
   1 11/15/17 01:00:50 kafka/kafka1.hostname.com@MSTORM.COM

Then try grabbing a ticket:

# kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/kafka1.hostname.com@MSTORM.COM
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: kafka/kafka1.hostname.com@MSTORM.COM

Valid starting     Expires            Service principal
09/05/18 22:29:11  09/06/18 22:29:11  krbtgt/MSTORM.COM@MSTORM.COM
        renew until 09/05/18 22:29:11

The above should succeed. This is how the Kafka client jaas.conf file should look in /usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="kafka/kafka1.hostname.com@MSTORM.COM";
};

Now you can retry; it should start.
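As a quick sanity check once the broker is up, you could point a console producer at the same JAAS file. This is only a sketch assuming HDP's default paths and broker port 6667; the exact console-producer flags and security protocol name vary by Kafka version:

# tell the client JVM where the JAAS config lives
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf"

# send a test message over SASL (Kerberos); topic name is illustrative
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list kafka1.hostname.com:6667 --topic test \
  --producer-property security.protocol=SASL_PLAINTEXT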
09-04-2018
10:36 AM
@Michael Bronson If you have deleted a lot of files, they could still be in .Trash. HDFS trash is just like the Recycle Bin; its purpose is to prevent you from unintentionally deleting something. You can enable this feature by setting the property fs.trash.interval to a number greater than 0 in core-site.xml (see the sketch below). After the trash feature is enabled, when you remove something from HDFS by using the rm command, files or directories are not wiped out immediately; instead, they are moved to a trash directory (/user/${username}/.Trash, see example).

$ hdfs dfs -rm -r /user/bob/5gb
15/09/18 20:34:48 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.
Moved: ‘hdfs://hdp2.6/user/bob/5gb’ to trash at: hdfs://hdp2.6/user/bob/.Trash/Current

If you want to empty the trash or just delete the entire trash directory, use the HDFS command-line utility to do that:

$ hdfs dfs -expunge

Or use -skipTrash:

hdfs dfs -rm -skipTrash /path/to/file/you/want/to/remove/permanently

Can you check the hidden directory .Trash?
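A minimal sketch of the core-site.xml property, assuming a 360-minute retention like the deletion interval shown in the log above:

<property>
  <name>fs.trash.interval</name>
  <!-- minutes that deleted files are kept in .Trash; 0 disables the trash feature -->
  <value>360</value>
</property>

You can then inspect what is pending deletion with: hdfs dfs -ls /user/$USER/.Trash/Current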