Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 612 | 06-04-2025 11:36 PM |
| | 1181 | 03-23-2025 05:23 AM |
| | 585 | 03-17-2025 10:18 AM |
| | 2190 | 03-05-2025 01:34 PM |
| | 1376 | 03-03-2025 01:09 PM |
12-04-2017
11:16 AM
@Michael Bronson
The correct syntax is:

hdfs haadmin -failover --forceactive namenode1 namenode2

where namenode1 is the currently active NameNode and namenode2 is the standby. Note the order of the active and standby arguments. Hope that helps.
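Before and after a forced failover, it helps to confirm which NameNode is actually active. A minimal sketch; `nn1` and `nn2` are hypothetical service IDs, so use the values from `dfs.ha.namenodes.<nameservice>` in your hdfs-site.xml:

```shell
# Sketch only: nn1/nn2 are hypothetical service IDs; substitute the
# values from dfs.ha.namenodes.<nameservice> in hdfs-site.xml.
state_of() {
  # Print the HA state (active/standby) of a NameNode service ID.
  hdfs haadmin -getServiceState "$1"
}
# state_of nn1                    # expect: active
# state_of nn2                    # expect: standby
# hdfs haadmin -failover nn1 nn2  # nn2 becomes the active NameNode
```

The calls are left commented out so the sketch can be read without a live cluster; uncomment them on a host with an HDFS client configured.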
11-28-2017
08:16 AM
@Anurag Mishra What error do you get when you try restarting Ambari? Check /var/log/ambari-server/ambari-server.log and please attach the log.
11-26-2017
10:43 PM
@M B If you look carefully, the encryption types in your krb5.conf have been commented out! To see the valid encryption types, check your kdc.conf, see below:

# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
NANDOS.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}

From the output below, this is your original master key, because the KVNO is 1:

# kdb5_util list_mkeys
Master keys for Principal: K/M@NANDOS.COM
KVNO: 1, Enctype: aes256-cts-hmac-sha1-96, Active on: Thu Jan 01 01:00:00 CET 1970 *

The output looks correct:

# kadmin.local
Authenticating as principal root/admin@NANDOS.COM with password.
kadmin.local: getprinc hive/test.nandos.com@NANDOS.COM
Principal: hive/test.nandos.com@NANDOS.COM
Expiration date: [never]
Last password change: Thu Aug 24 15:42:17 CEST 2017
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 0 days 00:00:00
Last modified: Thu Aug 24 15:42:17 CEST 2017 (root/admin@NANDOS.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 8
Key: vno 1, aes256-cts-hmac-sha1-96
Key: vno 1, aes128-cts-hmac-sha1-96
Key: vno 1, des3-cbc-sha1
Key: vno 1, arcfour-hmac
Key: vno 1, camellia256-cts-cmac
Key: vno 1, camellia128-cts-cmac
Key: vno 1, des-hmac-sha1
Key: vno 1, des-cbc-md5
MKey: vno 1
Attributes:
Policy: [none]
kadmin.local:

Can you do the following as the hive user?

1. Destroy any existing ticket:
$ kdestroy
2. Check for the correct principal:
$ klist -kt /etc/security/keytabs/hive.service.keytab
3. Then, using the correct principal, run kinit:
$ kinit -kt /etc/security/keytabs/hive.service.keytab hive/hdata1.xxxx.local@xxxx.LOCAL
4. Check the validity of the ticket:
$ klist
5. Try accessing Hive:
$ beeline
6. Connect with the correct principal:
!connect jdbc:hive2://localhost:10000/default;principal=hive/hdata1.xxxx.local@xxxx.LOCAL;auth=kerberos

That should work; please revert.
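The ticket-renewal steps above can be sketched as one small script. The keytab path and principal are the placeholders from this thread, not real values; substitute your own:

```shell
#!/bin/sh
# Keytab and principal copied from this thread as placeholders;
# substitute your own values before running.
KEYTAB=/etc/security/keytabs/hive.service.keytab
PRINC="hive/hdata1.xxxx.local@xxxx.LOCAL"

renew_hive_ticket() {
  kdestroy                       # drop any stale ticket
  klist -kt "$KEYTAB"            # confirm the principal stored in the keytab
  kinit -kt "$KEYTAB" "$PRINC"   # obtain a fresh TGT as the hive service user
  klist                          # verify the new ticket is valid
}
```

Run `renew_hive_ticket` as the hive user, then connect in beeline with the matching principal.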
11-19-2017
09:04 AM
@Vijay Kumar Yadav Can you describe your current setup? Are the 2 nodes running the same OS, and have you done the OS preparation beforehand? Which HDP/Ambari versions? Can you paste your ambari/hdp.repo? If you are using the public repo, have you enabled internet access? How much memory is allocated? Please revert.
11-16-2017
09:38 PM
@Rahul Narayanan Can you check whether your Ambari server and agent are running the same version?

# rpm -qa | grep ambari

Sample output:

# rpm -qa | grep ambari
ambari-metrics-monitor-2.5.2.0-298.x86_64
ambari-agent-2.5.2.0-298.x86_64
ambari-metrics-grafana-2.5.2.0-298.x86_64
ambari-infra-solr-2.5.2.0-298.noarch
ambari-infra-solr-client-2.5.2.0-298.noarch
ambari-metrics-collector-2.5.2.0-298.x86_64
ambari-server-2.5.2.0-298.x86_64
ambari-metrics-hadoop-sink-2.5.2.0-298.x86_64
11-16-2017
02:21 PM
@Rahul Narayanan Can you run the commands one after the other and then proceed with your cluster creation?

# ambari-server stop
# ambari-server reset
# ambari-server start

Hope that helps.
11-16-2017
12:12 PM
1 Kudo
@Bala Vignesh N V The hadoop dfs command has been deprecated; from Hadoop 2.0, use hdfs dfs instead. Having said that, hdfs dfs -test -[defsz] URI is used to check paths in HDFS, and it takes the options below:

-d: if the path is a directory, return 0.
-e: if the path exists, return 0.
-f: if the path is a file, return 0.
-s: if the path is not empty, return 0.
-z: if the file is zero length, return 0.

Example: check for valid directories in HDFS:

$ hdfs dfs -ls /
Found 7 items
drwxrwx--- - ambari-qa hdfs 0 2017-10-19 14:13 /user/ambari-qa
drwxr-xr-x - druid hdfs 0 2017-10-19 20:49 /user/druid
drwxr-xr-x - hbase hdfs 0 2017-10-19 13:43 /user/hbase
drwxr-xr-x - hcat hdfs 0 2017-10-19 13:53 /user/hcat
drwxr-xr-x - hive hdfs 0 2017-10-19 13:53 /user/hive
drwxrwxr-x - oozie hdfs 0 2017-10-19 13:57 /user/oozie
drwxr-xr-x - zeppelin hdfs 0 2017-10-19 19:25 /user/zeppelin

Test whether a path exists; the command below returns 0:

$ hdfs dfs -test -e /user/druid

Hope that helps.
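Since `-test` reports its result only through the exit code, a small wrapper makes the pattern explicit. A sketch, assuming an HDFS client on the PATH; `check_hdfs_path` is a helper name invented here:

```shell
# check_hdfs_path: hypothetical helper that prints whether an HDFS path
# exists, by branching on the exit status of `hdfs dfs -test -e`.
check_hdfs_path() {
  if hdfs dfs -test -e "$1"; then
    echo "exists: $1"
  else
    echo "missing: $1"
  fi
}
```

On the cluster listed above, `check_hdfs_path /user/druid` would print `exists: /user/druid`, since `-test -e` exits 0 for that path.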
11-16-2017
10:58 AM
@Rahul Narayanan Good news, you can now proceed. Using Postgres doesn't cause any problems at all, but if you are a fan of MySQL then see the link. If you intend to use the embedded Postgres, choose option 1 when running ambari-server setup. Using Postgres is straightforward and easy.