Member since: 01-19-2017
Posts: 3681
Kudos Received: 633
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1618 | 06-04-2025 11:36 PM |
|  | 2075 | 03-23-2025 05:23 AM |
|  | 986 | 03-17-2025 10:18 AM |
|  | 3758 | 03-05-2025 01:34 PM |
|  | 2587 | 03-03-2025 01:09 PM |
01-07-2019
03:34 PM
1 Kudo
@choppadandi vamshi krishna After successfully running the --mpack install, go to the bottom left of the Ambari UI and click Stack and Versions (see the attached screenshot for illustration). NiFi and NiFi Registry should now be available for installation. You can then proceed with the NiFi setup and choose whether you want it clustered or on a single node; in my example, I added a 6-node NiFi cluster to an existing HDP cluster. Follow the screen flow, and at the end of the installation and restart of the NiFi services you should see all the NiFi nodes in your Ambari UI. I would advise against installing the NiFi Certificate Authority, which means all logins will be anonymous; setting up certificates requires access to AD, or creating the first admin user (the NiFi superuser) who creates and grants privileges in NiFi. HTH
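For reference, the mpack step mentioned above typically looks like the sketch below. The tarball path and version are illustrative assumptions, not from this thread, and the script defaults to a dry run that only echoes each command; remove the leading $RUN to execute for real on the Ambari server host.

```shell
# Dry-run sketch of installing the HDF management pack that makes NiFi
# and NiFi Registry appear under Stack and Versions.
RUN="${RUN:-echo}"   # RUN=echo only prints; remove $RUN below to really run
MPACK="/tmp/hdf-ambari-mpack-3.3.1.0-10.tar.gz"   # example filename/version

install_mpack() {
  $RUN ambari-server install-mpack --mpack="$1" --verbose
  $RUN ambari-server restart   # restart Ambari so the new stack shows up
}

install_mpack "$MPACK"
```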
01-07-2019
09:35 AM
@huzaira bashir Can you share the screenshots corresponding to my photo5 and photo6? I built a VM to test your case and documented all the steps over the weekend, so I am surprised it doesn't work for you. Is the Java Cryptography Extension (JCE) installed? Check with the command below, adjusting your jdk_home accordingly:

# zipgrep CryptoAllPermission /usr/jdk64/jdk1.8.0_112/jre/lib/security/local_policy.jar

The desired output should be:

default_local.policy: permission javax.crypto.CryptoAllPermission;
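An alternative check, assuming a JDK with jrunscript on the PATH (the zipgrep above remains the authoritative test): with the unlimited-strength policy installed, the maximum allowed AES key length reports as 2147483647 instead of 128.

```shell
# Query the JCE policy via the JDK itself. Guarded because jrunscript
# is not shipped with every JDK; falls back to a hint if unavailable.
check_jce() {
  if command -v jrunscript >/dev/null 2>&1; then
    # Prints 2147483647 when the unlimited policy is active, 128 otherwise
    jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))' 2>/dev/null \
      || echo "jrunscript failed; use the zipgrep check instead"
  else
    echo "jrunscript not found; use the zipgrep check instead"
  fi
}
check_jce
```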
01-05-2019
10:41 PM
@huzaira bashir Please follow the steps and update this thread; I am sure there is a step you missed, so follow it page by page. In your screenshot I didn't see the Domain.
01-05-2019
10:39 PM
@huzaira bashir Please find a complete walkthrough of the Kerberization process.
01-04-2019
03:12 PM
1 Kudo
@harish Yes, for sure, that's doable. I am assuming you have set up 2 KDCs on different networks but accessible to the cluster.

Assumptions: you MUST have successfully configured the 2 master and slave KDCs.

My realm = REALM
Master host = master-kdc.test.com
Slave host = slave-kdc.test.com

Contents of /var/kerberos/krb5kdc/kpropd.acl:

host/master-kdc.test.com@REALM
host/slave-kdc.test.com@REALM

Create the configuration for kpropd on both the Master and Slave KDC hosts: create /etc/xinetd.d/krb5_prop with the following contents.

service krb_prop
{
    disable = no
    socket_type = stream
    protocol = tcp
    user = root
    wait = no
    server = /usr/sbin/kpropd
}

Configure xinetd to run as a persistent service on both the Master and Slave KDC hosts:

# systemctl enable xinetd.service
# systemctl start xinetd.service

Copy the following files from the Master KDC host to the Slave KDC host:

/etc/krb5.conf
/var/kerberos/krb5kdc/kadm5.acl
/var/kerberos/krb5kdc/kdc.conf
/var/kerberos/krb5kdc/kpropd.acl
/var/kerberos/krb5kdc/.k5.REALM

Perform the initial KDC database propagation to the Slave KDC:

# kdb5_util dump /usr/local/var/krb5kdc/slave_datatrans
# kprop -f /usr/local/var/krb5kdc/slave_datatrans slave-kdc.REALM

Start the Slave KDC:

# systemctl enable krb5kdc
# systemctl start krb5kdc

Script to propagate the updates from the Master KDC to the Slave KDC; create a cron job, or the like, to run this script on a frequent basis:

#!/bin/sh
# /var/kerberos/kdc-slave-propogate.sh
kdclist="slave-kdc.customer.com"
/sbin/kdb5_util dump /usr/local/var/krb5kdc/slave_datatrans
for kdc in $kdclist
do
    /sbin/kprop -f /usr/local/var/krb5kdc/slave_datatrans $kdc
done

To test the KDC HA, shut down the master KDC and start the slave KDC. Note that both KDCs should NEVER be running at the same time; the crontab script propagates all changes in the master's KDC database to the slave.

CAUTION: run kprop before shutting down the master KDC. Then, to test the KDC HA, log on to the cluster Linux CLI and follow the steps below (I am using the root user).

Switch user to hive/spark/yarn etc.:

# su - hive

Check if the hive user still has a valid Kerberos ticket. The output below shows the hive user still has a valid ticket:

$ klist
Ticket cache: FILE:/tmp/krb5cc_507
Default principal: hdfs-host1@{REALM}

Valid starting     Expires            Service principal
12/28/16 22:57:11  12/29/16 22:57:11  krbtgt/{REALM}@{REALM}
                   renew until 12/28/16 22:57:11
12/28/16 22:57:11  12/29/16 22:57:11  HTTP/host1.test.com@{REALM}
                   renew until 12/28/16 22:57:11
12/28/16 22:57:11  12/29/16 22:57:11  HTTP/host1.com@{REALM}
                   renew until 12/28/16 22:57:11

Destroy the Kerberos tickets as user hive:

$ kdestroy

Running klist again shouldn't give you any lines. Now try getting a valid ticket by running the following command, with the format {kinit -kt $keytab $principal}:

$ kinit -kt /etc/security/keytabs/hive.keytab {PRINCIPAL}

Repeating the klist should show a valid ticket for the hive user, which validates that the HA is functioning well.
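To schedule the propagation script above, a crontab entry along these lines could be used (the 15-minute schedule and the log path are illustrative assumptions, not from this thread):

```
# /etc/cron.d/kprop -- illustrative schedule and log path
*/15 * * * * root /bin/sh /var/kerberos/kdc-slave-propogate.sh >> /var/log/kprop.log 2>&1
```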
01-04-2019
02:15 PM
@huzaira bashir At least I am reassured about the previous screenshot. From the screenshot, I don't see the domain, which should be comma-separated in the following format; if your REALM is TEST.COM, note the dot (.):

.test.com,test.com

And the Kadmin too. Meanwhile, can you share a tokenized version of your krb5.conf, kdc.conf and kadm5.acl? Most important, ensure these 2 daemons are running.

Enable auto start:

# systemctl enable krb5kdc
# systemctl enable kadmin

Start the daemons:

# /etc/rc.d/init.d/krb5kdc start
# /etc/rc.d/init.d/kadmin start

or

# systemctl start krb5kdc
# systemctl start kadmin

whichever is applicable. HTH
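For comparison, a tokenized krb5.conf usually looks like the sketch below; all values are placeholders chosen to match the TEST.COM example, not taken from this thread:

```
[libdefaults]
  default_realm = TEST.COM
  dns_lookup_realm = false
  dns_lookup_kdc = false

[realms]
  TEST.COM = {
    kdc = kdc-host.test.com
    admin_server = kdc-host.test.com
  }

[domain_realm]
  .test.com = TEST.COM
  test.com = TEST.COM
```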
01-03-2019
05:00 PM
@huzaira bashir What is the HDP version? The screenshot doesn't look like a typical MIT Kerberos enabling UI. Could you be using AD as the KDC? That said, can you share the procedure you used? Can you share the Kerberos enabling screenshots from Ambari? If you could answer promptly with the above info, it would help a great deal. HTH
01-01-2019
10:38 PM
@john y Can you use http://localhost:8080 instead of www?
05-11-2018
04:55 AM
2 Kudos
If you have deployed and secured your multi-node cluster with an MIT KDC running on a Linux box (dedicated or not), the following also applies to a single-node cluster. Below is a step-by-step procedure to grant a group of users on the edge node access to services in the cluster.

Assumptions:
KDC is running
KDC database is created
KDC admin user and master password are available
REALM: DEV.COM
Users: user1 to user5
Edge node: for users
Kerberos admin user is root or a sudoer

A good solution security-wise is to copy the generated keytabs to the users' home directories. If these are local Unix users, NOT Active Directory users, then create the keytabs in e.g. /tmp and later copy them to their respective home directories, making sure to set the correct permissions on the keytabs. A good practice is to have a node dedicated to users, usually called an EDGE NODE; all client software is installed here and not on the data or name nodes!

Change directory to /tmp:

# cd /tmp

If you have root access, no need for sudo. Specify the password for user1:

# sudo kadmin.local
Authenticating as principal root/admin@DEV.COM with password.
kadmin.local: addprinc user1@DEV.COM
WARNING: no policy specified for user1@DEV.COM; defaulting to no policy
Enter password for principal "user1@DEV.COM":
Re-enter password for principal "user1@DEV.COM":
Principal "user1@DEV.COM" created.

Do the above step for all the new users:

addprinc user2@DEV.COM
addprinc user3@DEV.COM
addprinc user4@DEV.COM
addprinc user5@DEV.COM

Generate the keytab for user1; the keytab will be generated in the current directory:

# sudo ktutil
ktutil: addent -password -p user1@DEV.COM -k 1 -e RC4-HMAC
Password for user1@DEV.COM:
ktutil: wkt user1.keytab
ktutil: q

You MUST repeat the above for all 5 users.

Copy the newly created keytab to the user's home directory; in this example I have copied the keytab to /etc/security/keytabs:

# cp user1.keytab /etc/security/keytabs

Change ownership and permissions; here user1 belongs to the hadmin group:

# chown user1:hadmin user1.keytab

Again, do the above for all the other users. A good technical and security best practice is to copy the keytabs from the KDC to the respective home directories on the edge node and change the ownership of the keytabs.

Validate the principals; in this example the keytabs are in /etc/security/keytabs. The newly created user should validate the principal, because the principal is concatenated with the keytab when running the kinit:

# klist -kt /etc/security/keytabs/user1.keytab
Keytab name: FILE:/etc/security/keytabs/user1.keytab
KVNO Timestamp           Principal
---- ------------------- -------------------------------
   1 05/10/2018 10:46:27 user1@DEV.COM

Test: the new user1 should try grabbing a Kerberos ticket (keytab + principal):

# kinit -kt /etc/security/keytabs/user1.keytab user1@DEV.COM

The command below shows the validity of the Kerberos ticket:

# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: user1@DEV.COM

Valid starting       Expires              Service principal
05/10/2018 10:53:48  05/11/2018 10:53:48  krbtgt/DEV.COM@DEV.COM

You should now be able to access and successfully run jobs on the cluster. See the example below.

Accessing the Hive CLI with a Kerberos ticket:

$ hive
2018-05-10 23:18:57 WARN [main] conf.HiveConf: HiveConf of name hive.custom-extensions.root does not exist
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/2.6.2.0-205/0/hive-log4j.properties
hive> show databases;
OK
default
Time taken: 8.525 seconds, Fetched: 1 row(s)

Success!

Accessing Hive without a Kerberos ticket. First, destroy the Kerberos ticket:

$ kdestroy

Validate the absence of a Kerberos ticket:

$ klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_1001)

Accessing the Hive CLI should now fail.
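The per-user kadmin/ktutil steps above can be batched with a small script. This is a sketch under stated assumptions: the realm, group name, password placeholder and keytab directory match this example, and `ktadd` replaces the interactive `ktutil` flow (note that `ktadd` re-randomizes the principal's keys, so password logins for that principal stop working; the users then authenticate with the keytab instead). It defaults to a dry run that only echoes the commands.

```shell
# Dry-run sketch: create principals user1..user5 and export per-user
# keytabs. RUN=echo just prints each command; remove the leading $RUN
# to execute for real on the KDC host as root.
RUN="${RUN:-echo}"
REALM="DEV.COM"
KEYTAB_DIR="/etc/security/keytabs"

make_keytab() {
  user="$1"
  # Create the principal with a placeholder password (change it!)
  $RUN kadmin.local -q "addprinc -pw ChangeMe123 ${user}@${REALM}"
  # Export keys to a keytab; NOTE: ktadd re-randomizes the keys
  $RUN kadmin.local -q "ktadd -k ${KEYTAB_DIR}/${user}.keytab ${user}@${REALM}"
  # Lock down ownership and permissions, as in the manual steps above
  $RUN chown "${user}:hadmin" "${KEYTAB_DIR}/${user}.keytab"
  $RUN chmod 600 "${KEYTAB_DIR}/${user}.keytab"
}

for n in 1 2 3 4 5; do
  make_keytab "user${n}"
done
```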
03-15-2018
02:43 PM
@Mudassar Hussain I am positive that command should and will work without fail if you have successfully created a snapshottable directory. It's a sub-command of hdfs; can you simply run hdfs as the hdfs user?

$ hdfs
Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop
  classpath            prints the classpath
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  .......
  snapshotDiff         diff two snapshots of a directory or diff the current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
                       Use -help to see options
  ....
Most commands print help when invoked w/o parameters.

Now, once you have confirmed the above, run:

# su - hdfs
$ hdfs lsSnapshottableDir

Output:

drwxr-xr-x 0 mudassar hdfs 0 2018-03-15 10:38 1 65536 /user/mudassar/snapdemo

That is the directory I created to reproduce your issue on my cluster.
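For completeness, the workflow that makes a directory snapshottable, which the reply above assumes was already done, can be sketched as below; the path and snapshot name are illustrative. It defaults to a dry run that only echoes the commands.

```shell
# Dry-run sketch of the HDFS snapshot workflow. RUN=echo prints the
# commands; remove the leading $RUN to execute on a real cluster.
RUN="${RUN:-echo}"
DIR="/user/mudassar/snapdemo"   # illustrative path from the example

snapshot_demo() {
  $RUN hdfs dfsadmin -allowSnapshot "$DIR"   # superuser: mark dir snapshottable
  $RUN hdfs dfs -createSnapshot "$DIR" s1    # take a snapshot named s1
  $RUN hdfs lsSnapshottableDir               # should now list the directory
  $RUN hdfs snapshotDiff "$DIR" s1 .         # diff snapshot s1 vs current state
}
snapshot_demo
```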