Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 844 | 06-04-2025 11:36 PM |
| | 1426 | 03-23-2025 05:23 AM |
| | 714 | 03-17-2025 10:18 AM |
| | 2555 | 03-05-2025 01:34 PM |
| | 1671 | 03-03-2025 01:09 PM |
12-04-2017
02:15 PM
@Sedat Kestepe Stop the HDFS service if it is running. Start only the JournalNodes (they will need to be made aware of the formatting).
On the NameNode, as the hdfs user:
# su - hdfs
Format the NameNode:
$ hadoop namenode -format
Initialize the shared edits (for the JournalNodes):
$ hdfs namenode -initializeSharedEdits -force
Format ZooKeeper (to force ZooKeeper to reinitialise):
$ hdfs zkfc -formatZK -force
Using Ambari, restart the NameNode.
If you are running an HA NameNode, then on the second NameNode force a sync with the first:
$ hdfs namenode -bootstrapStandby -force
On every DataNode, clear the data directory (already done in your case).
Restart the HDFS service.
Hope that helps
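The steps above can be sketched as a single script. These commands are destructive, so the sketch defaults to a DRY_RUN guard (my addition, not part of the original post) that only prints each command; run it as the hdfs user, and only disable DRY_RUN once every step has been verified.

```shell
#!/bin/sh
# Sketch of the HA NameNode re-format sequence described above.
# DRY_RUN=1 (the default) only prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

run hadoop namenode -format                      # wipe NameNode metadata
run hdfs namenode -initializeSharedEdits -force  # re-seed the JournalNodes
run hdfs zkfc -formatZK -force                   # reset the ZKFC state in ZooKeeper
# After restarting the first NameNode via Ambari, on the second NameNode:
run hdfs namenode -bootstrapStandby -force       # sync with the first NameNode
```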
12-04-2017
01:25 PM
@Michael Bronson If this is a production environment I would advise you to contact Hortonworks support. How many nodes are in your cluster? How many JournalNodes do you have? Make sure you have an odd number. Could you also confirm whether, at any point after enabling HA, the Active and Standby NameNodes ever functioned?
Your log messages indicate a timeout when the NameNode attempted to call the JournalNodes. The NameNode must successfully reach a quorum of JournalNodes: at least 2 out of 3. This means the call timed out to at least 2 of the 3. That is a fatal condition for the NameNode, so by design it aborts. There are multiple potential causes of such a timeout; reviewing the logs from the NameNodes and JournalNodes would likely reveal more details.
If it is a non-critical cluster, you can follow the steps below.
Stop the HDFS service if it is running. Start only the JournalNodes (they will need to be made aware of the formatting).
On the first NameNode, as the hdfs user:
# su - hdfs
Format the NameNode:
$ hadoop namenode -format
Initialize the shared edits (for the JournalNodes):
$ hdfs namenode -initializeSharedEdits -force
Format ZooKeeper (to force ZooKeeper to reinitialise):
$ hdfs zkfc -formatZK -force
Using Ambari, restart that first NameNode.
On the second NameNode, force a sync with the first:
$ hdfs namenode -bootstrapStandby -force
On every DataNode, clear the data directory.
Restart the HDFS service.
Hope that helps
12-04-2017
12:02 PM
@Michael Bronson From your screenshot, both NameNodes are down, hence the failure of the failover commands. Since you enabled NameNode HA using Ambari, the ZooKeeper service instances and ZooKeeper FailoverControllers also need to be up and running. Just restart the NameNodes; it is odd that neither is marked Active or Standby. Depending on whether the cluster is DEV or Prod, please take the appropriate steps to restart the NameNodes, because your cluster is unusable anyway. In Ambari, use the HDFS "Restart All" command under Service Actions.
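If the Ambari UI is awkward to use at this point, the same restart can be driven through the Ambari REST API. The host, cluster name, and credentials below are placeholders, and the DRY_RUN guard is my addition so the sketch only prints the requests instead of issuing them.

```shell
#!/bin/sh
# Sketch: stop/start the HDFS service via the Ambari REST API.
# AMBARI, CLUSTER, and AUTH are assumptions; adjust for your cluster.
AMBARI="http://ambari.example.com:8080"
CLUSTER="mycluster"
AUTH="admin:admin"
DRY_RUN=${DRY_RUN:-1}

svc_state() {  # svc_state <STATE> <context>; INSTALLED = stopped, STARTED = running
  cmd="curl -u $AUTH -H 'X-Requested-By: ambari' -X PUT -d '{\"RequestInfo\":{\"context\":\"$2\"},\"Body\":{\"ServiceInfo\":{\"state\":\"$1\"}}}' $AMBARI/api/v1/clusters/$CLUSTER/services/HDFS"
  if [ "$DRY_RUN" -eq 1 ]; then echo "$cmd"; else eval "$cmd"; fi
}

svc_state INSTALLED "Stop HDFS"   # stop all HDFS components
svc_state STARTED   "Start HDFS"  # start them again
```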
12-04-2017
11:16 AM
@Michael Bronson
The correct syntax should be
$ hdfs haadmin -failover --forceactive namenode1 namenode2
where namenode1 is the currently active NameNode and namenode2 is the standby. Note the order of the active and standby inputs. Hope that helps
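The decision logic can be sketched as follows: query the state of both HA service IDs, then fail over from whichever side is active. The IDs nn1/nn2 are assumptions; take yours from dfs.ha.namenodes.&lt;nameservice&gt; in hdfs-site.xml. state_of is a thin wrapper so the logic can be exercised without a live cluster.

```shell
# Sketch: pick the failover direction from the current NameNode states.
state_of() { hdfs haadmin -getServiceState "$1" 2>/dev/null; }

plan_failover() {
  s1=$(state_of nn1)
  s2=$(state_of nn2)
  if [ "$s1" = "active" ]; then
    echo "hdfs haadmin -failover nn1 nn2"   # nn2 becomes the active NameNode
  elif [ "$s2" = "active" ]; then
    echo "hdfs haadmin -failover nn2 nn1"   # nn1 becomes the active NameNode
  else
    echo "no active NameNode found; start HDFS first" >&2
    return 1
  fi
}
```

Call plan_failover to print the command to run rather than executing it directly; that keeps a human in the loop for a forced failover.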
11-28-2017
08:16 AM
@Anurag Mishra What error do you get when you try restarting Ambari? Check /var/log/ambari-server/ambari-server.log and please attach the log.
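To triage that log quickly, a small helper (mine, not from the post) can print the most recent ERROR/FATAL lines; the default path is the one mentioned above, and another path can be passed if your installation logs elsewhere.

```shell
# Print the last ERROR/FATAL lines from the Ambari server log, with line numbers.
recent_errors() {
  grep -nE 'ERROR|FATAL' "${1:-/var/log/ambari-server/ambari-server.log}" | tail -20
}
```

Usage: `recent_errors` on the server itself, or `recent_errors /path/to/ambari-server.log` against a copied log.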
11-26-2017
10:43 PM
@M B If you observe carefully, the encryption types in your krb5.conf have been commented out! To see the valid encryption types, check your kdc.conf; see below:
# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
NANDOS.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
From the output below, this is your original master key, because the KVNO is 1:
# kdb5_util list_mkeys
Master keys for Principal: K/M@NANDOS.COM
KVNO: 1, Enctype: aes256-cts-hmac-sha1-96, Active on: Thu Jan 01 01:00:00 CET 1970 *
The output looks correct.
# kadmin.local
Authenticating as principal root/admin@NANDOS.COM with password.
kadmin.local: getprinc hive/test.nandos.com@NANDOS.COM
Principal: hive/test.nandos.com@NANDOS.COM
Expiration date: [never]
Last password change: Thu Aug 24 15:42:17 CEST 2017
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 0 days 00:00:00
Last modified: Thu Aug 24 15:42:17 CEST 2017 (root/admin@NANDOS.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 8
Key: vno 1, aes256-cts-hmac-sha1-96
Key: vno 1, aes128-cts-hmac-sha1-96
Key: vno 1, des3-cbc-sha1
Key: vno 1, arcfour-hmac
Key: vno 1, camellia256-cts-cmac
Key: vno 1, camellia128-cts-cmac
Key: vno 1, des-hmac-sha1
Key: vno 1, des-cbc-md5
MKey: vno 1
Attributes:
Policy: [none]
kadmin.local:
Can you do the following as the hive user:
$ kdestroy
Check for the correct principal:
$ klist -kt /etc/security/keytabs/hive.service.keytab
Then, using the correct principal, run kinit:
$ kinit -kt /etc/security/keytabs/hive.service.keytab hive/hdata1.xxxx.local@xxxx.LOCAL
Check the validity of the ticket:
$ klist
Try accessing Hive:
$ beeline
Connect with the correct principal:
!connect jdbc:hive2://localhost:10000/default;principal=hive/hdata1.xxxx.local@xxxx.LOCAL;auth=kerberos
That should work; please revert.
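The kdestroy/kinit sequence above can be sketched as one script. KEYTAB and PRINC are taken from the post but remain placeholders for your environment (take the exact principal from the klist -kt output), and the DRY_RUN guard is my addition so the commands are printed rather than executed until verified.

```shell
#!/bin/sh
# Sketch of the ticket refresh for the hive service principal.
KEYTAB=/etc/security/keytabs/hive.service.keytab
PRINC=hive/hdata1.xxxx.local@xxxx.LOCAL
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" -eq 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

run kdestroy                      # drop any stale ticket cache
run klist -kt "$KEYTAB"           # confirm the principal stored in the keytab
run kinit -kt "$KEYTAB" "$PRINC"  # obtain a fresh TGT as the hive user
run klist                         # verify the new ticket before trying beeline
```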
11-19-2017
09:04 AM
@Vijay Kumar Yadav Can you describe your current setup? Are the 2 nodes running the same OS, and have you done the OS preparation beforehand? Which HDP/Ambari versions? Can you paste your ambari.repo/hdp.repo? If you are using the public repo, have you enabled internet access? How much memory is allocated? Please revert
11-16-2017
09:38 PM
@Rahul Narayanan Can you check whether your ambari server & agent are running the same version?
# rpm -qa | grep ambari
Sample output:
# rpm -qa | grep ambari
ambari-metrics-monitor-2.5.2.0-298.x86_64
ambari-agent-2.5.2.0-298.x86_64
ambari-metrics-grafana-2.5.2.0-298.x86_64
ambari-infra-solr-2.5.2.0-298.noarch
ambari-infra-solr-client-2.5.2.0-298.noarch
ambari-metrics-collector-2.5.2.0-298.x86_64
ambari-server-2.5.2.0-298.x86_64
ambari-metrics-hadoop-sink-2.5.2.0-298.x86_64
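The version comparison can be automated with a small helper (mine, not from the post) that reduces the rpm -qa output to the distinct ambari package versions; more than one line of output means the server and agent are mismatched.

```shell
# Extract the version-release from each ambari-* package name and deduplicate.
ambari_versions() {
  sed -E 's/^ambari-[a-z-]+-([0-9.]+-[0-9]+)\.(x86_64|noarch)$/\1/' | sort -u
}

# Example: rpm -qa | grep '^ambari' | ambari_versions
```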