Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1017 | 06-04-2025 11:36 PM |
| | 1575 | 03-23-2025 05:23 AM |
| | 790 | 03-17-2025 10:18 AM |
| | 2850 | 03-05-2025 01:34 PM |
| | 1867 | 03-03-2025 01:09 PM |
10-10-2019
07:19 AM
@vsrikanth9 Not quite — the REALM part is wrong again; the rest is okay, but you substituted the wrong values. Here is how it's supposed to look (see the highlighted part):

```
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
 default_realm = HADOOPSECURITY.COM
 default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 HADOOPSECURITY.COM = {
  kdc = p1.bigdata.com
  admin_server = p1.bigdata.com
 }

[domain_realm]
 .hadoopsecurity.com = HADOOPSECURITY.COM
 hadoopsecurity.com = HADOOPSECURITY.COM
```

Do that and let me know. The KDC and admin server are usually the same host 🙂
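As a quick sanity check before restarting the KDC, you can confirm that the `[domain_realm]` section actually maps back to `default_realm`. This is a sketch: it writes a trimmed copy of the config above to a temp file so it runs anywhere; on a real host you would point `CONF` at `/etc/krb5.conf` instead.

```shell
#!/bin/sh
# Sketch: verify that default_realm and the [domain_realm] mappings agree.
# Uses a temp copy of the config above; set CONF=/etc/krb5.conf on a real host.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[libdefaults]
 default_realm = HADOOPSECURITY.COM
[domain_realm]
 .hadoopsecurity.com = HADOOPSECURITY.COM
 hadoopsecurity.com = HADOOPSECURITY.COM
EOF

# Extract the default realm, then check the [domain_realm] section maps to it.
REALM=$(awk -F'= *' '/^[[:space:]]*default_realm/ {print $2}' "$CONF")
echo "default_realm: $REALM"
if sed -n '/\[domain_realm\]/,$p' "$CONF" | grep -q "= $REALM$"; then
  echo "domain_realm maps to $REALM"
else
  echo "WARNING: [domain_realm] does not map to $REALM"
fi
rm -f "$CONF"
```

A mismatch here is exactly the kind of typo that makes `kinit` fail with "Cannot find KDC for realm" even though the KDC itself is healthy.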
10-10-2019
01:22 AM
@irfangk1 There are a couple of things to do. First, check your disk space and inode usage. To rule out permissions, share the listing for the relevant directory:

```
$ ls -la /var/lib/mysql
```

Can you also share /var/log/mysql/mysql.log? And what is the value of innodb_buffer_pool_size in /etc/mysql/my.cnf? You can edit my.cnf and add:

```
[mysqld]
innodb_force_recovery = 1
```

Then run:

```
$ sudo systemctl start mysql
```

Hope that helps.
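The disk-space and inode checks mentioned above can be done like this (a generic sketch; the data directory path is the Debian/Ubuntu default and is an assumption):

```shell
#!/bin/sh
# Check free blocks and free inodes on the filesystem holding the MySQL
# data directory (assumed /var/lib/mysql). Either one being exhausted
# can stop MySQL from starting or writing.
DATADIR=/var/lib/mysql
[ -d "$DATADIR" ] || DATADIR=/   # fall back so the sketch runs anywhere
df -h "$DATADIR"                 # block usage: a full filesystem stops InnoDB
df -i "$DATADIR"                 # inode usage: 100% IUse also causes write failures
```

Remember that innodb_force_recovery = 1 is a read-mostly rescue mode: dump your data and remove the setting once the server is back up.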
10-09-2019
11:11 PM
@vsrikanth9
1. The KDC part of your screenshot has an error 🙂 In the domains field, copy and paste the value below as-is to replace p1.bigdata.com — note the leading dot (.) and the comma separating the names:

```
.hadoopsecurity.com,hadoopsecurity.com
```

The validation passed because in reality it ONLY tests connectivity to the KDC server.

2. In the Kadmin part, the admin principal should match the output of `# kadmin.local` — something like admin/admin@HADOOPSECURITY.COM or root/admin@HADOOPSECURITY.COM, whatever you chose during the installation of Kerberos. After that, launch the recreation of the keytabs and all should be okay. Make sure the KDC server is up and running during this process. Please revert.
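For reference, admin principals follow the Kerberos form primary/instance@REALM. A tiny sketch splitting one into its parts (the principal value is an assumed example; on a real KDC you would list yours with `kadmin.local -q "listprincs"`):

```shell
#!/bin/sh
# Split a Kerberos principal (primary/instance@REALM) into its parts
# using POSIX parameter expansion. Example principal is an assumption.
PRINC="admin/admin@HADOOPSECURITY.COM"
PRIMARY=${PRINC%%/*}      # text before the first '/'
REST=${PRINC#*/}
INSTANCE=${REST%%@*}      # text between '/' and '@'
REALM=${PRINC##*@}        # text after the last '@'
echo "primary=$PRIMARY instance=$INSTANCE realm=$REALM"
```

Note that the realm part is case-sensitive and conventionally uppercase, which is why admin/admin@hadoopsecurity.com and admin/admin@HADOOPSECURITY.COM are different principals.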
10-09-2019
12:21 PM
@vsrikanth9 Your krb5.conf entry is wrong; please change it to match the below:

```
[domain_realm]
 .hadoopsecurity.com = HADOOPSECURITY.COM
 hadoopsecurity.com = HADOOPSECURITY.COM
```

Then restart the KDC and kadmin:

```
# systemctl start krb5kdc.service
# systemctl start kadmin.service
```

That should resolve your problem. Happy hadooping!
10-08-2019
11:23 PM
@irfangk1 There is something wrong — I don't see the database entry in your ambari.properties; how will Ambari bind to its database? It should contain something like:

```
server.jdbc.url=jdbc:postgresql://<HOSTNAME>:<PORT>/ambari?ssl=true
```

Can you validate? You are using the embedded Postgres, right?
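A quick mechanical check for this (a sketch: it generates a sample properties file inline so it runs anywhere; the hostname is made up, and on a real host you would point `PROPS` at the Ambari server's properties file instead):

```shell
#!/bin/sh
# Sketch: verify that server.jdbc.url is set in an Ambari properties file.
# Sample file and hostname below are assumptions for illustration.
PROPS=$(mktemp)
cat > "$PROPS" <<'EOF'
server.jdbc.database=postgres
server.jdbc.url=jdbc:postgresql://ambari-host.example.com:5432/ambari
EOF

JDBC_URL=$(sed -n 's/^server\.jdbc\.url=//p' "$PROPS")
if [ -n "$JDBC_URL" ]; then
  echo "JDBC URL found: $JDBC_URL"
else
  echo "WARNING: no server.jdbc.url - Ambari cannot bind to its database"
fi
rm -f "$PROPS"
```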
10-08-2019
07:54 PM
@Splash The problem you are facing is well known with NiFi: "There was an issue decrypting protected properties". It means NiFi can't decrypt the protected passwords in nifi.properties. Have a look at this link on nifi.properties, and read carefully section 3, "Setting-up/Migrating encryption key" — you might need to run the encrypt-config.sh script. Please let me know if you need more help.
10-08-2019
01:42 PM
1 Kudo
@Gcima009 When you generate templates in NiFi, they are stripped of all encrypted values, so when importing those templates into another NiFi cluster you have to populate all the processor and controller service passwords manually. Check whether the node that is not starting has any values in the parameters below. Backing up flow.xml.gz or flow.tar captures the entire flow exactly as it is, encrypted sensitive passwords and all — and NiFi will not start if it cannot decrypt the encrypted sensitive properties contained in the flow.xml. When sensitive properties (e.g. passwords) are added, they are encrypted using these settings from your nifi.properties file:

```
# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
```

In order to drop your entire flow.xml.gz or flow.tar onto another clean NiFi, these values must all match exactly.

Ref: http://www.contemplatingdata.com/2017/08/28/apache-nifi-sensitive-properties-need-know/
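The "must match exactly" requirement can be checked mechanically. A sketch that diffs only the nifi.sensitive.props.* lines of two nifi.properties files (sample files are generated inline with made-up key values so the sketch runs anywhere; substitute the real file paths from your two nodes):

```shell
#!/bin/sh
# Sketch: compare the sensitive-property settings of two nifi.properties
# files. A non-empty diff means a flow.xml.gz encrypted on one node
# cannot be decrypted on the other. Sample contents are assumptions.
A=$(mktemp); B=$(mktemp)
cat > "$A" <<'EOF'
nifi.sensitive.props.key=sourceKey123
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
EOF
cat > "$B" <<'EOF'
nifi.sensitive.props.key=differentKey456
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
EOF

grep '^nifi\.sensitive\.props\.' "$A" > "${A}.keys"
grep '^nifi\.sensitive\.props\.' "$B" > "${B}.keys"
if diff "${A}.keys" "${B}.keys" >/dev/null; then
  RESULT=match;    echo "sensitive props match"
else
  RESULT=mismatch; echo "MISMATCH: flows from one node will not decrypt on the other"
fi
rm -f "$A" "$B" "${A}.keys" "${B}.keys"
```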
10-08-2019
01:14 PM
1 Kudo
@marcusvmc The root user is not a normal HDP user but an OS superuser used to escalate privileges for changes at the host level. The HBase superuser is hbase, just like hdfs 🙂 Ranger reads /etc/passwd and /etc/group and ONLY loads (syncs) users/groups whose ID is > 500. If you want to trick Ranger into syncing root, whose entry is root:x:0:0:root:/root:/bin/bash, you have to lower the minimum user ID as described below.

Procedure — configure Ranger user sync for UNIX:
1. On the Ranger Customize Services page, select the Ranger User Info tab.
2. Click Yes under Enable User Sync.
3. Use the Sync Source drop-down to select UNIX, then set the following properties:

Table 1. UNIX user sync properties

| Property | Description | Default value |
|---|---|---|
| Minimum user ID | Only sync users above this user ID. | 500 |
| Password file | The location of the password file on the Linux server. | /etc/passwd |
| Group file | The location of the groups file on the Linux server. | /etc/group |

Question: why would you want root's rights managed by Ranger? Use sudo if you want to impersonate root.

I hope that helps!
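The effect of the Minimum user ID setting can be simulated with a one-liner over /etc/passwd (a sketch; 500 is the default cutoff from the table above):

```shell
#!/bin/sh
# Simulate Ranger usersync's "Minimum user ID" filter: list only users
# whose UID is above the cutoff (default 500). root (UID 0) is always
# filtered out at the default setting, which is why it never syncs.
MIN_UID=500
SYNCED=$(awk -F: -v min="$MIN_UID" '$3 > min {print $1}' /etc/passwd)
echo "Users Ranger would sync:"
echo "$SYNCED"
```

Setting MIN_UID=0 in the simulation (or Minimum user ID = 0 in Ranger) is what would pull root in — which, again, is rarely what you want.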
10-07-2019
02:16 AM
@irfangk1 Can you share your ambari.properties and your /etc/hosts?
10-06-2019
09:07 AM
1 Kudo
@ThanhP Good — everything is perfect for you now 🙂 You should ONLY execute `sudo -u hdfs hdfs namenode -format` as a last resort; it's dangerous and not recommended on a production cluster, because it re-initializes (formats) your NameNode and thereby deletes all the metadata stored on it. Having said that, the answer you accepted can't help a member who encounters the same "HDFS NameNode won't leave safemode" issue — maybe you should un-accept it and accept your own answer, as it's the more realistic one. Happy hadooping!
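For anyone landing here with the same symptom, a far safer first step than formatting is to inspect and, if the NameNode is merely stuck, manually leave safemode. A sketch — the dfsadmin subcommands are standard; the guard just lets the snippet run on hosts without the hdfs CLI:

```shell
#!/bin/sh
# Sketch: check HDFS safemode state and (optionally) force-leave it.
# Only attempts the real commands when the hdfs CLI is present.
if command -v hdfs >/dev/null 2>&1; then
  HDFS_PRESENT=yes
  hdfs dfsadmin -safemode get      # prints "Safe mode is ON" or "... OFF"
  # hdfs dfsadmin -safemode leave  # uncomment to force-leave safemode
else
  HDFS_PRESENT=no
  echo "hdfs CLI not found; run this on a cluster node"
fi
```

Before force-leaving, it is worth checking why the NameNode is holding safemode (usually missing/under-replicated blocks reported by `hdfs dfsadmin -report`), since leaving safemode does not fix the underlying block loss.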