05-17-2019
11:53 AM
@Muhammad waqas I noticed some discrepancies in your krb5.conf. Please copy and paste this one, which I have updated with your entries:

[libdefaults]
default_realm = ABCDATA.ORG
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
forwardable = true
udp_preference_limit = 1000000
default_tgs_enctypes = aes256-cts-hmac-sha1-96
default_tkt_enctypes = aes256-cts-hmac-sha1-96
permitted_enctypes = aes256-cts-hmac-sha1-96
kdc_timeout = 3000
[realms]
ABCDATA.ORG = {
kdc = cloudera.abcdata.org
admin_server = cloudera.abcdata.org
default_domain = ABCDATA.ORG
}
[domain_realm]
.abcdata.org = ABCDATA.ORG
abcdata.org = ABCDATA.ORG
[logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
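
As a quick sanity check after replacing the file, you can try to obtain a ticket. This is a minimal sketch, assuming an admin/admin principal exists in ABCDATA.ORG; use any principal you actually have:

$ kinit admin/admin@ABCDATA.ORG
$ klist
# A krbtgt/ABCDATA.ORG@ABCDATA.ORG entry in the output confirms the KDC answered.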
If you see an error like:

Problem [TOKEN, KERBEROS]; Host Details: local host is: "FQDN/X.X.X.X"; destination host is: "FQDN":PORT;

it indicates your hostname is not configured correctly. List the principals to verify:

# kadmin.local
Authenticating as principal root/admin@ABCDATA.ORG with password.
kadmin.local: listprincs
Sample output on Hortonworks:

nm/cloudera.abcdata.org@ABCDATA.ORG
nn/cloudera.abcdata.org@ABCDATA.ORG
oozie/cloudera.abcdata.org@ABCDATA.ORG
rangeradmin/cloudera.abcdata.org@ABCDATA.ORG
rangerlookup/cloudera.abcdata.org@ABCDATA.ORG
rangertagsync/cloudera.abcdata.org@ABCDATA.ORG
rangerusersync/cloudera.abcdata.org@ABCDATA.ORG
rm/cloudera.abcdata.org@ABCDATA.ORG

Can you share the output of:

$ hostname -f

Does it match the entries in /etc/hosts? The format should be IP FQDN ALIAS. After the validation and correction, please regenerate the keytabs using the Cloudera Manager Admin Console. HTH
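
A quick way to cross-check the forward mapping is sketched below; the IP and alias shown are illustrative only:

$ hostname -f
cloudera.abcdata.org
$ getent hosts cloudera.abcdata.org
10.0.0.5 cloudera.abcdata.org cloudera
# The IP, FQDN, and alias must be consistent; a mismatch breaks the host-based Kerberos principals.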
05-16-2019
09:38 AM
@Muhammad waqas Can you share the below files?

kadm5.acl
kdc.conf
krb5.conf

Switch to the hdfs user from the root account (please be aware your output won't be exactly the same):

# su - hdfs

Check whether you already have a valid ticket; you shouldn't, and the output should look like below:

$ klist
klist: No credentials cache found (filename: /tmp/krb5cc_1013)

Destroy any existing Kerberos ticket:

$ kdestroy

Get the principal attached to the hdfs keytab:

$ klist -kt /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
1 10/11/2018 10:48:48 hdfs-host@HADOOP.COM
1 10/11/2018 10:48:48 hdfs-host@HADOOP.COM
1 10/11/2018 10:48:48 hdfs-host@HADOOP.COM
1 10/11/2018 10:48:48 hdfs-host@HADOOP.COM
1 10/11/2018 10:48:48 hdfs-host@HADOOP.COM

Grab a ticket by passing the keytab and principal as below:

$ kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-host@HADOOP.COM

Now you should have a valid Kerberos ticket:

$ klist
Ticket cache: FILE:/tmp/krb5cc_1013
Default principal: hdfs-host@HADOOP.COM
Valid starting Expires Service principal
05/16/2019 11:24:11 05/17/2019 11:24:11 krbtgt/HADOOP.COM@HADOOP.COM
Try to access HDFS:

$ hdfs dfs -ls /

The above command should not error out.
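
If you want the whole check in one pass, here is a minimal sketch that automates the steps above; the keytab path matches this example, and the awk field position relies on the klist -kt layout shown earlier:

#!/usr/bin/env bash
# Sketch: acquire an HDFS ticket from the headless keytab and test access.
KEYTAB=/etc/security/keytabs/hdfs.headless.keytab
kdestroy 2>/dev/null
# Read the first principal stored in the keytab (4th field of the listing).
PRINCIPAL=$(klist -kt "$KEYTAB" | awk 'NR>3 {print $4; exit}')
kinit -kt "$KEYTAB" "$PRINCIPAL" || { echo "kinit failed"; exit 1; }
klist
hdfs dfs -ls / && echo "HDFS access OK"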
05-16-2019
09:12 AM
@Mazen Elshayeb Can you capture and share your screenshot? Firstly, can you ensure your KDC and kadmin are started? Did you run this step? If not, please do that while logged in as root; the output should look like below:

# kadmin.local -q "addprinc admin/admin"

Desired output:

Authenticating as principal root/admin@HADOOP.COM with password.
WARNING: no policy specified for admin/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "admin/admin@HADOOP.COM": {password_used_during_creation}
Re-enter password for principal "admin/admin@HADOOP.COM": {password_used_during_creation}
Principal "admin/admin@HADOOP.COM" created. Restart kdc (Centos please adapt accordingly) # /etc/rc.d/init.d/krb5kdc start Desired output Starting Kerberos 5 KDC: [ OK ] Restart kadmin # /etc/rc.d/init.d/kadmin start Desired output Starting Kerberos 5 Admin Server: [ OK ] Now continue with Ambari kerberization wizard using the admin/admin@HADOOP.COM with password earlier set That should work
05-16-2019
06:20 AM
@Mazen Elshayeb That's good news! The principal should be admin/admin@HADOOP.COM, and the password is the master password you used when creating the Kerberos database. You must have gotten a warning telling you to keep that password safe 🙂 Please proceed and revert!
05-16-2019
06:07 AM
@Manjunath P N You are responding to an old thread, so I don't think you will get answers that fast; it's better to start a new thread.
05-15-2019
12:41 PM
@Ashok kumar Thangarathinam You can add new properties by clicking on the + to add new (key/value) client properties. HTH
05-14-2019
08:49 PM
@Michael Bergamini Can you connect to ZooKeeper and share the output of:

/usr/hdp/3.1.0.0-78/zookeeper/bin/zkCli.sh

Desired output:

WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, controller, brokers, storm, zookeeper, infra-solr, hbase-unsecure, admin, isr_change_notification, log_dir_event_notification, controller_epoch, hiveserver2, hiveserver2-leader, rmstore, atsv2-hbase-unsecure, consumers, ambari-metrics-cluster, latest_producer_id_block, config]

Please revert
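
If the interactive shell is awkward to capture, a non-interactive variant is sketched below, assuming the same HDP 3.1.0.0-78 path and ZooKeeper listening locally on 2181:

$ echo "ls /" | /usr/hdp/3.1.0.0-78/zookeeper/bin/zkCli.sh -server localhost:2181
$ echo ruok | nc localhost 2181
imok
# "imok" is the reply to the ruok four-letter-word check and confirms the server is serving.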
05-14-2019
07:40 PM
@Madhura Mhatre If you increase the size of the same ibdata disk mount, then you don't need to update any metadata, because the pointers in the metastore will remain intact. Make sure you shut down all the databases on the mount point before increasing its size. Happy hadooping
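
For reference, growing the mount in place might look like the sketch below, assuming an LVM-backed ext4 volume; the device names and the mysqld service are placeholders for your environment:

# systemctl stop mysqld                    # stop the database(s) on the mount first
# lvextend -L +50G /dev/vg_data/lv_mysql   # grow the logical volume (names illustrative)
# resize2fs /dev/vg_data/lv_mysql          # grow the ext4 filesystem to match
# systemctl start mysqld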
05-14-2019
06:49 PM
1 Kudo
@Madhura Mhatre From the output, you can clearly see that it's the Hive database that has grown. Having said that, @Jay Kumar SenSharma's solution would work if it were the Ambari database that was huge, where you can purge the history, but that is not the case here. You cannot purge the Hive database, as you would lose data, but you can rebuild the data into a more compact table, as shown below. The end result is a manageable table size on disk.

Option 1

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
create table if not exists t1 (a int, b int) partitioned by (c int); -- your original table
create table t1orc (a int, b int) partitioned by (c int) stored as ORC; -- your compressed table
insert into table t1orc partition(c) select a, b, c from t1;

Note that plain CTAS has these restrictions, which is why the example uses CREATE TABLE plus INSERT ... SELECT instead:

The target table cannot be a partitioned table.
The target table cannot be an external table.
The target table cannot be a list bucketing table.

Option 2

The other solution is to change the location and increase the size of the mount point. Be aware that you must maintain the same path, as the Hive metastore keeps that location on record, so you will need to update the location as documented here. Hope that helps
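
If you do relocate the data, updating a table's location might look like this minimal sketch; the connection string, table name, and path are placeholders:

$ beeline -u "jdbc:hive2://hiveserver:10000" \
    -e "ALTER TABLE t1 SET LOCATION 'hdfs://namenode:8020/new/mount/warehouse/t1'; DESCRIBE FORMATTED t1;"
# DESCRIBE FORMATTED prints the Location field, confirming the metastore now points at the new path.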