Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 927 | 06-04-2025 11:36 PM |
|  | 1532 | 03-23-2025 05:23 AM |
|  | 760 | 03-17-2025 10:18 AM |
|  | 2730 | 03-05-2025 01:34 PM |
|  | 1809 | 03-03-2025 01:09 PM |
01-01-2021
01:53 PM
@bvishal I provided an answer to such a situation in "Ambari MySQL database lost". Please have a look at it and see if it resolves your problem; it did for someone in a similar situation. Happy Hadooping
01-01-2021
01:45 PM
1 Kudo
@brunokatekawa If my guess is right, what is happening is that you are trying to use your community username/password; this will definitely fail. Ambari 2.7.x is only available to companies with a valid HDP 3.x support license, i.e. an active subscription with Cloudera. As you can see below, access is denied when I use my community login. Here is the HDP support matrix. Starting with the HDP 3.1.5 release, access to HDP repositories requires authentication. To access the binaries, you must first have the required authentication credentials (username and password); read "Accessing HDP repositories". Hope that helps
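To make that concrete, here is a sketch of what an authenticated HDP repo definition looks like on CentOS/RHEL. The credentials are placeholders for your Cloudera license credentials, and the exact baseurl path varies by OS and HDP version, so check the documentation linked above:
# /etc/yum.repos.d/hdp.repo -- illustrative only
[HDP-3.1.5.0]
name=HDP-3.1.5.0
baseurl=https://<username>:<password>@archive.cloudera.com/p/HDP/centos7/3.x/updates/3.1.5.0/
enabled=1
gpgcheck=1
Without valid paywall credentials embedded this way, yum hits the same access-denied response you see in the browser.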
12-19-2020
03:53 PM
@Sud Your question isn't detailed. What sort of access are you thinking of restricting to read-only: data or the UI? For Ambari there is the Cluster User role, which is read-only for its services, including configurations, service status, and health alerts. The other option concerns reading data in HDFS, where you can use HDFS ACLs, which are POSIX-compliant (rwx), but that won't work for Hive tables. You should know that Ranger controls authorization for HDFS, Hive, HBase, Kafka, Knox, YARN, Storm, Atlas and other components, depending on the software (HDP, CDH or CDP). Happy hadooping
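If you go the HDFS ACL route, a minimal sketch looks like this (the user sud and the path /data are placeholders; ACLs also require dfs.namenode.acls.enabled=true in hdfs-site.xml):
$ hdfs dfs -setfacl -m user:sud:r-x /data    # grant read + traverse only on the directory
$ hdfs dfs -getfacl /data                    # verify the ACL entries took effect
The equivalent read-only setup for Hive tables would be a Ranger Hive policy granting just SELECT.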
12-16-2020
02:28 PM
1 Kudo
@mike_bronson7 To achieve your goal for the 2 issues, you will need to edit Kafka's server.properties to add the following line:
auto.leader.rebalance.enable = false
Then run the below, assuming you have a ZooKeeper quorum of host1,host2,host3:
bin/kafka-preferred-replica-election.sh --zookeeper host1:2181,host2:2181,host3:2181/kafka
This should balance your partitions; you can validate with:
bin/kafka-topics.sh --zookeeper host1:2181,host2:2181,host3:2181/kafka --describe
For the second issue with the lost broker, you need to create a new broker and set its broker.id to the id of the broker that is gone and not recoverable, then run kafka-preferred-replica-election.sh again to balance the topics.
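If you want to trigger the election only for specific partitions instead of cluster-wide, the same tool accepts a JSON file; a sketch, where the topic name and partition numbers are placeholders:
$ cat > topics.json <<EOF
{"partitions": [{"topic": "my-topic", "partition": 0}, {"topic": "my-topic", "partition": 1}]}
EOF
$ bin/kafka-preferred-replica-election.sh --zookeeper host1:2181,host2:2181,host3:2181/kafka --path-to-json-file topics.json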
12-14-2020
12:05 PM
@bvishal You are surely doing something wrong; Kerberizing shouldn't take you that long. Follow my previous document and recreate the KDC database by destroying the existing one, and share with me the krb5.conf, kadm5.acl, and kdc.conf. You are not executing the correct command; it's supposed to be # kadmin.local and not # kadmin Happy hadooping
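The distinction matters because kadmin.local reads the KDC database directly on the KDC host (root access is enough), while kadmin goes over the network and needs an already-working admin principal to authenticate. As a quick sanity check on the KDC host:
# kadmin.local -q "listprincs"    # should list the existing principals (K/M@<REALM>, kadmin/admin@<REALM>, ...) without prompting for a Kerberos password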
12-14-2020
01:11 AM
@bvishal You should execute kadmin as the root user or with sudo: # kadmin Hope that helps
12-13-2020
01:15 AM
@hanu Can you be precise: which platform (CDH/CDP or HDP) and its version? Also confirm whether it's kerberized or not. The more info you give, the better.
12-12-2020
07:50 AM
@rampradeep_ All servers in a cluster should be managed by CM, Ambari, etc. In the case of CDH 6.3.3 you will use Cloudera Manager to add gateway (aka client) roles to the remote server, so that this gateway/client/edge node (the terms are used interchangeably) is centrally managed by CM, which deploys the client software and configurations for YARN, ZooKeeper, HDFS and so on, depending on the services. If you decide to install any client manually, then you will have to maintain core-site.xml, hdfs-site.xml, mapred-site.xml and the like by hand. These files are kept up to date (overridden) when CM-managed; otherwise it's a vanilla setup, quite a headache to manage.
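If you do end up managing a client by hand, one way to avoid hand-writing those XMLs is to pull the CM-generated client configuration bundle from the CM API. A sketch, where the CM host, credentials, cluster name, service name and API version (v33 should match CM 6.3; adjust for your release) are all placeholders:
$ curl -u admin:admin "http://cm-host.example.com:7180/api/v33/clusters/Cluster1/services/hdfs/clientConfig" -o hdfs-clientconfig.zip
$ unzip hdfs-clientconfig.zip    # contains core-site.xml, hdfs-site.xml, etc. as CM would deploy them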
12-11-2020
03:22 PM
@Yuriy_but The answer is very simple: you are logging in to Hue as the admin user, and admin has no HDFS home directory. There are two ways to fix it. Either delegate the HDFS home directory creation to Hue by checking "Create home directory" when adding the user:
Users ---> Add/Sync LDAP User
username = admin [search]
Distinguished Name = unchecked
Create home directory = checked
Or create it yourself as the HDFS user:
$ hdfs dfs -mkdir /user/admin
Then change the ownership:
$ hdfs dfs -chown admin /user/admin
Now when you log in to Hue you shouldn't get any issues. Please let me know.
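A quick way to confirm the home directory is in place and owned correctly (on a kerberized cluster, kinit as the hdfs principal first):
$ hdfs dfs -stat "%u" /user/admin    # should print the owner: admin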
12-11-2020
03:00 PM
1 Kudo
@bvishal I see some contradictions in your response. You say "1) Yes, I have entered the 'admin principal' in the same format example/admin@EXAMPLE.AI in the pop-up window", yet also "2) Also, I checked the krb5.conf and found a section for my realm (EXAMPLE.COM) inside the [realms] part of the file." You can't have both "EXAMPLE.AI" and "EXAMPLE.COM"; the REALMs are indeed different. Let me walk you through the setup. Let's assume your REALM is "EXAMPLE.AI" and the FQDN of your host is "host1.example.ai". Because the Kerberization has failed and no keytabs have been generated, we'll start afresh by deleting the KDC database. Please use root or sudo; in the walkthrough below I have used root.

# Destroy the existing KDC database
Get the REALM name from your krb5.conf, then:
# kdb5_util -r EXAMPLE.AI destroy
Desired output
Deleting KDC database stored in '/var/kerberos/krb5kdc/principal', are you sure? (type 'yes' to confirm)? yes
OK, deleting database '/var/kerberos/krb5kdc/principal'...
** Database '/var/kerberos/krb5kdc/principal' destroyed.

# Prep the config files
Prepping krb5.conf and kdc.conf first will enable you to create the KDC database in silent mode [-s]. Modify /etc/krb5.conf to look like below:
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = EXAMPLE.AI
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true

[realms]
EXAMPLE.AI = {
kdc = <your_kdc_server_here>
admin_server = <your_kdc_server_here>
}

[domain_realm]
.example.ai = EXAMPLE.AI
example.ai = EXAMPLE.AI

Then modify the kdc.conf file (in /var/kerberos/krb5kdc/) to look like below:
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88

[realms]
EXAMPLE.AI = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}

# Create the KDC database
# /usr/sbin/kdb5_util create -s
Desired output
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'EXAMPLE.AI', master key name 'K/M@EXAMPLE.AI'
You will be prompted for the database Master Password. It is important that you NOT FORGET this password.
Enter KDC database master key: <welcome1>
Re-enter KDC database master key to verify: <welcome1>

# Assign Administrator Privilege (a very important step)
# vi /var/kerberos/krb5kdc/kadm5.acl
Ensure that the KDC ACL file includes an entry that allows the admin principal to administer the KDC for your realm. The entry should look like below:
*/admin@EXAMPLE.AI *

# Create a Principal
This is the principal to use when kerberizing in the Ambari UI:
# kadmin.local -q "addprinc admin/admin"
Authenticating as principal root/admin@EXAMPLE.AI with password.
WARNING: no policy specified for admin/admin@EXAMPLE.AI; defaulting to no policy
Enter password for principal "admin/admin@EXAMPLE.AI":
Re-enter password for principal "admin/admin@EXAMPLE.AI":
Principal "admin/admin@EXAMPLE.AI" created.

The principal created above is what you will use in the Ambari Kerberos setup UI:
PRINCIPAL = admin/admin@EXAMPLE.AI
PASSWORD = welcome1

# Start the Kerberos Services
Start the KDC server and the KDC admin server, and enable autostart at boot using chkconfig or systemctl:
# service krb5kdc start
Starting Kerberos 5 KDC: [ OK ]
# service kadmin start
Starting Kerberos 5 Admin Server: [ OK ]

# Run the Ambari Kerberos Wizard
It should run successfully using the credentials hinted above. Once done, you should have your keytabs generated in /etc/security/keytabs/*:
# ls /etc/security/keytabs

Hope this gives you light. Happy hadooping
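One extra sanity check worth running before the Ambari wizard: verify that the new admin principal can actually authenticate against the KDC (using the welcome1 password from the walkthrough above):
# kinit admin/admin@EXAMPLE.AI
Password for admin/admin@EXAMPLE.AI: <welcome1>
# klist    # should show a krbtgt/EXAMPLE.AI@EXAMPLE.AI ticket
If kinit succeeds here, the Ambari wizard will be able to authenticate with the same credentials.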