Member since: 01-19-2017
Posts: 3454
Kudos Received: 557
Solutions: 340
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 42 | 01-04-2021 09:47 AM
 | 98 | 01-03-2021 01:45 PM
 | 92 | 01-01-2021 01:45 PM
 | 196 | 12-07-2020 01:54 PM
 | 361 | 11-03-2020 03:31 PM
01-12-2021
11:58 AM
@zetta4ever In a Hadoop cluster, three types of nodes exist: master, worker, and edge nodes. The distinction of roles helps maintain efficiency. Master nodes control which nodes perform which tasks and what processes run on what nodes. The majority of work is assigned to worker nodes: they store most of the data and perform most of the calculations. Edge nodes, aka gateways, facilitate communications from end users to master and worker nodes. The 3 master nodes should have the NameNode [Active & Standby], YARN ResourceManager [Active & Standby], the ZooKeeper quorum [3 masters], and the other components you intend to install; on the 6 worker nodes, aka slave nodes, you will install the NodeManagers, DataNodes, and all the clients. There is no need to install the clients on the master nodes; those nodes have important tasks, which may impact performance if interrupted. Edge nodes allow end users to contact worker nodes when necessary, providing a network interface for the cluster without leaving the entire cluster open to communication. That limitation improves reliability and security. As work is evenly distributed between worker nodes, the edge node's role helps avoid data skewing and performance issues. A sketch of such a layout is shown below. See my document on edge nodes https://community.cloudera.com/t5/Support-Questions/Edge-node-or-utility-node-packages/td-p/202164# Hope that helps
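As a rough illustration only (the hostnames and the exact component split are assumptions, not from your cluster), a 3-master / 6-worker / 1-edge layout could look like this:

master1: NameNode (Active), ResourceManager (Standby), ZooKeeper
master2: NameNode (Standby), ResourceManager (Active), ZooKeeper
master3: ZooKeeper, other master components (e.g. HiveServer2, Oozie)
worker1..worker6: DataNode, NodeManager
edge1: client/gateway configs only (HDFS, YARN, Hive clients)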
01-05-2021
12:35 PM
@sass I just posted a response to a similar question, and it should be valid for your case too. Folks are starting to miss Hortonworks, right? https://community.cloudera.com/t5/Support-Questions/CDH-Express-edition-be-affected-with-Paywall-subscription/td-p/308786 Happy hadooping !!!! Was your question answered? If so, make sure to mark the answer as the accepted solution. If you find a reply useful, Kudos this answer by hitting the thumbs up button.
01-05-2021
12:28 PM
@Ninads Here is a community article by @kramalingam, Connecting to Kerberos secured HBase cluster from Java application. It's a walkthrough that should give you ideas. Was your question answered? If so, make sure to mark the answer as the accepted solution. If you find a reply useful, Kudos this answer by hitting the thumbs up button.
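PS: while you follow that walkthrough, a quick hedged smoke test from a cluster host can confirm Kerberos works before you touch the Java code (the keytab path and principal below are examples, not from your setup):

# obtain a ticket with the application keytab (illustrative names)
kinit -kt /etc/security/keytabs/myapp.keytab myapp@EXAMPLE.COM
klist                      # verify a valid TGT was issued
echo "list" | hbase shell  # should list tables if HBase accepts the Kerberos identity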
01-05-2021
12:14 PM
@sass You should get worried if you are using CDH Express, because once the trial period expires, a valid subscription will be required to continue using the software. This blanket change of policy will affect all legacy versions of Cloudera's Distribution including Apache Hadoop (CDH), Hortonworks Data Platform (HDP), Data Flow (HDF/CDF), and Cloudera Data Science Workbench (CDSW). Here is a good read from Cloudera with the details of what you should know and expect come January 31, 2021: Paywall Expansion Update Happy hadooping Was your question answered? If so, make sure to mark the answer as the accepted solution. If you find a reply useful, Kudos this answer by hitting the thumbs up button.
01-05-2021
11:49 AM
@MayankJ Your suspicion is spot on !! Note: Sentry only allows you to grant roles to groups that have alphanumeric characters and underscores (_) in the group name. When Sentry is enabled, you must use Beeline to execute Hive queries. The Hive CLI is not supported with Sentry and must be disabled; see Disabling Hive CLI for information on how to do that. The GRANT ROLE statement can be used to grant roles to groups, and only Sentry admin users can grant roles to a group.

Create a role and grant it to a group:

CREATE ROLE datascientist;
GRANT ROLE datascientist TO GROUP gurus;

Grant on the database test:

GRANT ALL ON DATABASE test TO ROLE datascientist;

Grant on the table lesson in the test database:

GRANT ALL ON TABLE test.lesson TO ROLE datascientist;

The reason Sentry grants ROLES to GROUPS is to simplify management: you bundle privileges into a role and grant it to a group, so the only moving part is the user. The below statement will effectively strip mayankj of the privileges the datascientist role grants:

# gpasswd -d mayankj gurus
Removing user mayankj from group gurus

Quite simple and effective. Roles are created to group together privileges or other roles. They are a means of facilitating the granting of multiple privileges or roles to groups. Was your question answered? If so, make sure to mark the answer as the accepted solution. If you find a reply useful, kudos this answer by hitting the thumbs up button.
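PS: as a quick sanity check, and assuming Beeline connectivity details for your cluster (the JDBC URL below is illustrative), you can verify the role wiring from Beeline, since Sentry requires Beeline anyway:

beeline -u "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM"
SHOW ROLE GRANT GROUP gurus;    -- confirms the datascientist role is attached to the group
SHOW GRANT ROLE datascientist;  -- lists the privileges bundled in the role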
01-05-2021
11:06 AM
@saivenkatg55 My assumptions: you have already executed the HDP environment preparation (if not, see prepare the environment https://docs.cloudera.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-installation/content/prepare_the_environment.html), you are running on Linux [RedHat, CentOS], and you have root access! Note: Replace test.ambari.com with the output of your $ hostname -f and re-adapt to fit your cluster:

# root password = welcome1
# hostname = test.ambari.com
# ranger user and password are the same

Steps

Install the MySQL connector if not installed [Optional]:

# yum install -y mysql-connector-java

Shut down Ambari:

# ambari-server stop

Re-run the below command, it won't hurt:

# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

Back up the Ambari server properties file:

# cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties.bak

Change the timeouts of the Ambari server:

# echo 'server.startup.web.timeout=120' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.acquisition-size=5' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-age=0' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-idle-time=14400' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-idle-time-excess=0' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.idle-test-interval=7200' >> /etc/ambari-server/conf/ambari.properties

Recreate a new Ranger user and its grants:

# mysql -u root -pwelcome1
CREATE USER 'rangernew'@'%' IDENTIFIED BY 'rangernew';
CREATE USER 'rangernew'@'localhost' IDENTIFIED BY 'rangernew';
GRANT ALL PRIVILEGES ON *.* TO 'rangernew'@'localhost';
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'localhost' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'test.ambari.com' IDENTIFIED BY 'rangernew';
FLUSH PRIVILEGES;
quit;

Create the new Ranger database:

# mysql -u rangernew -prangernew
create database rangernew;
show databases;
quit;

Start the Ambari server:

# ambari-server start
......Desired output.........
..................
.................
Ambari Server 'start' completed successfully.

For the Ranger Ambari UI setup, use the hostname in this example, test.ambari.com, and the corresponding passwords. Test the Ranger DB connectivity: the connection test should succeed, and if it does you can now start Ranger successfully.
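Before clicking "Test Connection" in the Ambari UI, a quick hedged check from the shell (reusing the example credentials above) can confirm the new database is reachable:

# should connect and list zero tables for the fresh rangernew database
mysql -u rangernew -prangernew -h test.ambari.com -e "USE rangernew; SHOW TABLES;"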
Drop the old Ranger DB:

# mysql -u root -pwelcome1
mysql> DROP DATABASE old_Ranger_name;

The above steps should resolve your Ranger issue. Was your question answered? If so, make sure to mark the answer as the accepted solution. If you find a reply useful, Kudos this answer by hitting the thumbs up button.
01-04-2021
12:55 PM
@ibrahima This community helps with the 2 most used Hadoop flavors, Cloudera and Hortonworks, and these 2 software vendors handled and configured their Kerberos differently. In Cloudera the keytabs are found in /run/cloudera-scm-agent/process/* while in Hortonworks they are in /etc/security/keytabs/*, so it would be good if you clearly stated which one you are running. Please include a description of your cluster too, like HA or not: I see from the log a failover to rm16, which suggests you have RM HA? Has the user kinited before attempting the operation? Is the user impersonating cabhbwg? Happy hadooping
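PS: for the kinit question, a minimal check would be the following (the keytab path follows the Hortonworks convention mentioned above, and the principal is illustrative; adjust for Cloudera):

klist    # an empty or expired ticket cache means the user never ran kinit
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@EXAMPLE.COM   # illustrative principal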
01-04-2021
09:58 AM
@HoldYourBreath To add to @GangWar's answer, Azure is your best bet since you want to install Oracle VirtualBox and import your Cloudera Quickstart VM image. Don't forget to set up a Windows 10 VM with at least 16GB of RAM and enough CPUs, and remember to set up auto-shutdown to avoid extra costs when your VM isn't running. Create-windows-virtual-machine-in-azure How to install windows 10 in Azure Hope this information is useful Happy hadooping
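PS: a hedged sketch of the auto-shutdown step using the Azure CLI (the resource group, VM name, and time are placeholders):

# schedule a daily 19:30 auto-shutdown for the VM (names are examples)
az vm auto-shutdown -g MyResourceGroup -n MyQuickstartVM --time 1930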
01-04-2021
09:47 AM
@Mondi The simple answer is YES, and the best source is the vendor itself: Rack awareness CDP. Computations are performed with the assistance of rack awareness scripts. Hope that helps. Was your question answered? If so, make sure to mark the answer as the accepted solution. If you find a reply useful, Kudos this answer by hitting the thumbs up button.
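PS: for a concrete feel, here is a minimal sketch of such a rack awareness script (the IP-to-rack mappings are invented; the script is wired in via the net.topology.script.file.name property in core-site.xml):

#!/bin/bash
# maps each DataNode IP passed as an argument to a rack path
declare -A RACK=( [10.0.1.11]=/rack1 [10.0.1.12]=/rack1 [10.0.2.11]=/rack2 )
for ip in "$@"; do
  echo "${RACK[$ip]:-/default-rack}"   # unknown hosts fall back to a default rack
done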
01-03-2021
03:36 PM
1 Kudo
@bvishal Sorry, I was away for a while.

1) "Yes, I have entered the 'admin principal' in the same format example/admin@EXAMPLE.AI in the pop-up window." Somehow I feel your values are not correct: in the Ambari wizard you should enter either root/admin@EXAMPLE.AI or admin/admin@EXAMPLE.AI, depending on the value you gave when adding the admin principal when you initially ran kadmin.local.

2) "Also, I checked the krb5.conf and found a section for my realm (EXAMPLE.COM) inside the [realms] part of the file." That part of the krb5.conf is wrong; it should be EXAMPLE.AI. Sample of /etc/krb5.conf:

[libdefaults]
default_realm = EXAMPLE.AI
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
forwardable = true
udp_preference_limit = 1000000
default_tkt_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1
default_tgs_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1
permitted_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1

[realms]
EXAMPLE.AI = {
kdc = kdc.EXAMPLE.AI
admin_server = kdc.EXAMPLE.AI
default_domain = EXAMPLE.AI
}

[domain_realm]
.example.ai = EXAMPLE.AI
example.ai = EXAMPLE.AI

[logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log

Replace all occurrences of EXAMPLE.COM with EXAMPLE.AI in the kdc.conf and kadm5.acl. Please let me know if you still need help.
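PS: for that last replacement step, a hedged one-liner (the paths follow the MIT KDC defaults used above) can do it in both files at once:

sed -i 's/EXAMPLE\.COM/EXAMPLE.AI/g' /var/kerberos/krb5kdc/kdc.conf /var/kerberos/krb5kdc/kadm5.acl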
01-03-2021
02:49 PM
@HoldYourBreath I now see what's happening: you need to start CM and all the roles on the Quickstart VM before you can connect successfully through HUE. I also think you are really short on memory: as you can see, Cloudera Express needs 8GB of memory and 2 CPUs, while Cloudera Enterprise needs at least 10GB and 2 CPUs; you can see the highlighted parts. I would advise you to spin up a Windows 10 VM in Azure and use that for your learning, but beware: Cloudera no longer provides access to Quickstart, you have the CDP trial!! Was your question answered? If so, make sure to mark the answer as the accepted solution. If you find a reply useful, kudo this answer by hitting the thumbs up button.
01-03-2021
01:48 PM
@rohit_r_sharma Can you share the syntax for the topic creation? Is your cluster kerberized? Is your Ranger Kafka plugin enabled? Please respond and tag me!
01-03-2021
01:45 PM
1 Kudo
@bvishal You don't really need to mix 2 different databases [PostgreSQL and MySQL]. You can use MySQL, or MariaDB, the free fork of MySQL, in the advanced database configuration for your cluster. MySQL has the typical SQL syntax; PostgreSQL is another world! You don't need to install MySQL on the Ambari agent hosts, because that would mean if you have 20 nodes you would be running 20 MySQL/MariaDB databases. Usually, you install MySQL/MariaDB on the Ambari host and you create, apart from the Ambari database, the hive, oozie, ranger, rangerkms, etc. databases. If you are deploying using Ambari, then the ambari-agents are deployed and configured automatically by Ambari. Was your question answered? If so, make sure to mark the answer as the accepted solution. If you find a reply useful, kudos this answer by hitting the thumbs up button.
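PS: as a rough sketch of that layout (the database name, user, and password below are common conventions, not prescriptions), the extra service databases are created once on the Ambari host:

mysql -u root -p <<'SQL'
CREATE DATABASE hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hivepassword';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL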
01-03-2021
12:17 PM
@PauloNeves Yes, the command show databases will list all databases in a Hive instance whether you are authorized to access them or not. I am sure this is a cluster devoid of Ranger or Sentry, which are the 2 authorization tools in Cloudera!!!

Once the Ranger plugin is enabled, authorization is delegated to Ranger to provide fine-grained data access control in Hive, including row-level filtering and column-level masking. This is the recommended setting to make your database administration easier, as it provides centralized security administration, access control, and detailed auditing for user access within Hadoop, Hive, HBase, and other components in the ecosystem. Unfortunately, I had already enabled the Ranger plugin for Hive on my cluster, but all the same, it confirms what I wrote above. Once the Ranger plugin is enabled for a component, i.e. Hive, HBase, or Kafka, authorization is managed exclusively through Ranger.

Database listing before Ranger: below is what happens if my user sheltong has not explicitly been given authorization through Ranger, see [screenshots]. I see no databases, though I have over 8 databases. See the output of the hive user, who has explicit access to all the tables due to the default policy: he could see the databases.

Database listing after Ranger: after creating a policy explicitly giving the user sheltong access to the 3 databases [policy granting explicit access to 3 databases], when I re-run show databases, bingo!

Back to your question: show tables from forbidden_db returning an empty list can be true, especially if the database is empty! It has no tables, like the screenshot below; though I have access to the database, it's empty. Now I create a table, re-run the query, and I am able to see the table.

I hope this demonstrates the power of Ranger and explains what you may be encountering. I am also thinking that if your cluster has the Ranger Hive plugin enabled, you could have select on the databases, but you will need at minimum explicit select permission on the underlying database tables to be able to see them. Happy Hadooping
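PS: to reproduce what my screenshots showed, a hedged Beeline session looks like this (the JDBC URL and user come from my example cluster; adjust to yours):

beeline -u "jdbc:hive2://hs2:10000/default" -n sheltong -e "SHOW DATABASES;"
# empty output until a Ranger policy grants the user access
beeline -u "jdbc:hive2://hs2:10000/default" -n sheltong -e "SHOW TABLES IN forbidden_db;"
# an empty list here can simply mean the database has no tables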
01-03-2021
03:18 AM
@nishant2305 Can you share the walkthrough of your setup? Generation of the cert using the TLS toolkit? Just wondering, does this host exist? ldap://ldap_hostname:389 And the associated LDIF: dc=example,dc=org cn=admin,dc=example,dc=org Please revert
01-02-2021
04:07 PM
1 Kudo
@Chahat_0 Hadoop is designed to ensure that compute (NodeManagers) runs as close to data (DataNodes) as possible. Usually, containers for jobs are allocated on the same nodes where the data is present; hence in a typical Hadoop cluster, both DataNodes and NodeManagers run on the same machine. The NodeManager is the RM's slave process, while the DataNode is the slave process of the NameNode, which is responsible for coordinating HDFS functions.

Resource Manager: runs as a master daemon and manages the resource allocation in the cluster.
Node Manager: runs as a slave daemon and is responsible for the execution of tasks on every single DataNode.

NodeManagers manage the containers requested by jobs; DataNodes manage the data. The NodeManager (NM) is YARN's per-node agent and takes care of the individual compute nodes in a Hadoop cluster. This includes keeping up to date with the ResourceManager (RM); overseeing containers' life-cycle management; monitoring resource usage (memory, CPU) of individual containers; tracking node health; log management; and auxiliary services that may be exploited by different YARN applications. The NodeManager communicates directly with the ResourceManager. The ResourceManager and the NameNode are both master components [processes] that can run in a single or HA setup and should run on separate, identical, usually high-spec servers [nodes] compared to the data nodes. ZooKeeper is another important component.

The ResourceManager and NodeManager combine to form a data-computation framework. The ResourceManager acts as the scheduler and allocates resources amongst all the applications in the system; the NodeManager takes direction from the ResourceManager and runs on each node in the cluster, managing the resources available on that single node. The ApplicationMaster, a framework-specific library, is responsible for running a specific YARN job, negotiating resources from the ResourceManager, and working with the NodeManager to execute and monitor containers. Hope that helps
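PS: you can see this colocation on a live cluster with a quick hedged check; both lists normally report the same hosts:

yarn node -list -all                   # the NodeManager hosts
hdfs dfsadmin -report | grep '^Name'   # the DataNode hosts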
01-02-2021
03:37 PM
1 Kudo
@HoldYourBreath I downloaded a CDH Quickstart VM and imported it into my new Oracle VirtualBox 6.1; please find attached screenshots to show you my configs.

01.JPG --> the network setup: Adapter1 is Bridged Adapter and Adapter2 is NAT
02.JPG --> Bridged Adapter details
02b.JPG --> memory setting: I gave my Quickstart sandbox 16 GB and 2 CPUs; my host has 32 GB and 4 CPUs

I started the Quickstart sandbox and was presented with the classic UI:

03.JPG --> CDH Quickstart default sandbox UI
04.JPG --> clicked to the console (see the arrow) and ran ifconfig; clearly it picked a Bridged Adapter class C 192.168.x IP from my LAN and the default 10.x IP
05.JPG --> the host's hosts file entry with the FQDN, though I used the IP too
05b.JPG --> combined UIs showing the VM, the hosts file, and Chrome with CM opened on port 7180
06.JPG --> started these default Quickstart roles
07.JPG --> roles all running OK
08.JPG --> detail of the HDFS overview on port 50070; I didn't make any changes to the FW etc.
09.JPG --> file browser
10.JPG --> HUE UI on port 8888
11.JPG --> files/docs in the HUE browser

I didn't encounter the same problems as you, but I wanted to remind you to ensure you have enough memory to allocate to your sandbox.
01-01-2021
02:05 PM
@prasanna06 Your problem resembles this one: "check your cluster UI to ensure that workers are registered and have sufficient resources". Happy hadooping
01-01-2021
02:00 PM
@chhaya_vishwaka Can you confirm you went through all the Prerequisites for adding classic clusters and checked against the Cloudera Support Matrix? Please revert
01-01-2021
01:53 PM
@bvishal I provided an answer to such a situation in Ambari MySQL database lost. Please have a look at it and see if that resolves your problem; it did for someone in a similar situation. Happy Hadooping
01-01-2021
01:45 PM
1 Kudo
@brunokatekawa What is happening, if my guess is right, is that you are trying to use your community username/password; this will definitely fail. Ambari 2.7.x is available for companies with valid HDP 3.x support licenses, i.e. an active subscription with Cloudera. As you can see below, access is denied, as I used my community login. Here is the HDP support Matrix. Starting with the HDP 3.1.5 release, access to HDP repositories requires authentication. To access the binaries, you must first have the required authentication credentials (username and password). Read accessing HDP repositories Hope that helps
12-19-2020
03:53 PM
@Sud Your question isn't detailed. What sort of access are you thinking of restricting: read-only data or UI? For Ambari you have the Cluster User role, which is read-only for its services, including configurations, service status, and health alerts. The other option concerns reading data in HDFS, where you can use HDFS ACLs, which are POSIX-compliant (rwx), but that won't work for Hive tables. You should know that Ranger controls authorization for HDFS, Hive, HBase, Kafka, Knox, YARN, Storm, Atlas, and other components, depending on the software: HDP, CDH, or CDP. Happy hadooping
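PS: for the HDFS ACL route, a minimal sketch of a read-only grant (the path and user are illustrative) looks like this:

hdfs dfs -setfacl -m user:auditor:r-x /data/reports   # read-only access for one user
hdfs dfs -getfacl /data/reports                       # verify the resulting ACL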
12-16-2020
02:28 PM
1 Kudo
@mike_bronson7 To achieve your goal for the 2 issues, you will need to edit Kafka's server.properties to add the following line:

auto.leader.rebalance.enable = false

Then run the below, assuming you have a zookeeper quorum of host1, host2, host3:

bin/kafka-preferred-replica-election.sh --zookeeper host1:2181,host2:2181,host3:2181/kafka

This should balance your partitions; you can validate with:

bin/kafka-topics.sh --zookeeper host1:2181,host2:2181,host3:2181/kafka --describe

For the second issue with the lost broker, you need to create a new broker and set its broker.id to the id of the previous broker that is gone or not recoverable, then run kafka-preferred-replica-election.sh again to balance the topics.
12-14-2020
12:05 PM
@bvishal You are surely doing something wrong; Kerberizing shouldn't take you that long. Follow my previous document and recreate the KDC database by destroying the current one, and share with me the krb5.conf, kadm5.acl, and kdc.conf. You are also not executing the correct command: it's supposed to be

# kadmin.local

and not

# kadmin

Happy hadooping
12-14-2020
01:11 AM
@bvishal You should execute kadmin as the root user or with sudo:

# kadmin

Hope that helps
12-13-2020
01:15 AM
@hanu Can you be precise about the platform, CDH/CDP or HDP, and its version? Also confirm whether it's kerberized or not. The more info you give, the better.
12-12-2020
07:50 AM
@rampradeep_ All servers in a cluster should be managed by CM, Ambari, etc. In the case of CDH 6.3.3, you will use Cloudera Manager to add gateway, aka client, roles to the remote server, so that this gateway/client/edge node (the terms are used interchangeably) is centrally managed by CM, which deploys the client software like the YARN, ZooKeeper, and HDFS clients/gateways, depending on the services you choose. If you decide to install any client manually, then you will have to maintain the core-site.xml/mapred-site.xml files yourself. These files are overridden if the node is CM-managed; otherwise it's a vanilla setup, quite a headache to manage.
12-11-2020
03:22 PM
@Yuriy_but The answer is very simple: you are logging in as the admin user in HUE, and admin has no HDFS home directory. There are 2 ways to fix this.

Either delegate the HDFS home directory creation to HUE by checking "Create home directory" under Users ---> Add/Sync LDAP User:

username = admin [search]
Distinguished Name = unchecked
Create home directory = checked

Or create it as the HDFS user:

$ hdfs dfs -mkdir /user/admin

Then change the ownership:

$ hdfs dfs -chown admin /user/admin

Now when you log in to HUE you should not get any issues. Please let me know.
12-11-2020
03:00 PM
1 Kudo
@bvishal I see some contradictions in your response. In 1) you say "Yes, I have entered the 'admin principal' in the same format example/admin@EXAMPLE.AI in the pop-up window", yet in 2) you say "Also, I checked the krb5.conf and found a section for my realm (EXAMPLE.COM) inside the [realms] part of the file." You can't have both EXAMPLE.AI and EXAMPLE.COM as REALMs; they are indeed different. Let me walk you through the setup. Let's assume your REALM is "EXAMPLE.AI" and the FQDN of your host is "host1.example.ai". Because the Kerberization has failed and no keytabs have been generated, we'll start afresh by deleting the KDC database. Please use root or sudo in the below walkthrough; I have used root.

Get the REALM name from your krb5.conf, then destroy the database:

# kdb5_util -r EXAMPLE.AI destroy

Desired output:

Deleting KDC database stored in '/var/kerberos/krb5kdc/principal', are you sure?
(type 'yes' to confirm)? yes
OK, deleting database '/var/kerberos/krb5kdc/principal'...
** Database '/var/kerberos/krb5kdc/principal' destroyed.

Prepping the krb5.conf and kdc.conf will enable you to create the KDC database in silent mode [-s]. Edit the current /etc/krb5.conf file to look like below:

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = EXAMPLE.AI
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
EXAMPLE.AI = {
kdc = <your_kdc_server _here>
admin_server = <your_kdc_server _here>
}
[domain_realm]
.example.ai = EXAMPLE.AI
example.ai = EXAMPLE.AI

Next, modify the /var/kerberos/krb5kdc/kdc.conf file to look like below:

[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
EXAMPLE.AI = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}

At this stage you can now create the KDC database in silent mode:

# /usr/sbin/kdb5_util create -s

Desired output:

Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'EXAMPLE.AI',
master key name 'K/M@EXAMPLE.AI'
You will be prompted for the database Master Password. It is important that you NOT FORGET this password.
Enter KDC database master key: <welcome1>
Re-enter KDC database master key to verify: <welcome1>

Assign administrator privileges, a very important step:

# vi /var/kerberos/krb5kdc/kadm5.acl

Ensure that the KDC ACL file includes an entry allowing the admin principal to administer the KDC for your realm. The entry should look like below:

*/admin@EXAMPLE.AI *

Create a principal. This is the principal to use when kerberizing in the Ambari UI:

# kadmin.local -q "addprinc admin/admin"
Authenticating as principal root/admin@EXAMPLE.AI with password.
WARNING: no policy specified for admin/admin@EXAMPLE.AI; defaulting to no policy
Enter password for principal "admin/admin@EXAMPLE.AI":
Re-enter password for principal "admin/admin@EXAMPLE.AI":
Principal "admin/admin@EXAMPLE.AI" created.

The principal created above is what you will use in the Ambari Kerberos setup UI:

PRINCIPAL = admin/admin@EXAMPLE.AI
PASSWORD = welcome1

Start the Kerberos services. Start the KDC server and the KDC admin server, and enable autostart at boot using chkconfig or systemctl:

# service krb5kdc start
Starting Kerberos 5 KDC: [ OK ]
# service kadmin start
Starting Kerberos 5 Admin Server: [ OK ]

Now run the Kerberos wizard in Ambari; it should complete successfully using the credentials hinted above. At this stage, you should have your keytabs generated in /etc/security/keytabs/*:

# ls /etc/security/keytabs

Hope this gives you light. Happy hadooping
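PS: as a final hedged sanity check (the smokeuser keytab and ambari-qa principal are HDP conventions; the names Ambari generates on your cluster may differ):

klist -kt /etc/security/keytabs/smokeuser.headless.keytab   # list the principals stored in the keytab
kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa@EXAMPLE.AI && echo "KDC auth OK"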
12-11-2020
05:26 AM
@bvishal I am wondering what your input was in the initial pop-up, but your admin principal should look like admin/admin@REALM or
root/admin@REALM
The REALM should already be defined in your krb5.conf, or your kadm5.acl should give you a clue. Please let me know