Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
10-09-2019
12:21 PM
@vsrikanth9 Your krb5.conf entry is wrong; please change it to match the below:
[domain_realm]
.hadoopsecurity.com = HADOOPSECURITY.COM
hadoopsecurity.com = HADOOPSECURITY.COM
Then restart the KDC and kadmin:
# systemctl start krb5kdc.service
# systemctl start kadmin.service
That should resolve your problem. Happy hadooping
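For context, here is a minimal sketch of the krb5.conf those [domain_realm] entries sit in; the KDC/admin-server hostname is an assumption, so substitute your own:

```ini
# /etc/krb5.conf -- minimal sketch; kdc.hadoopsecurity.com is a placeholder hostname
[libdefaults]
  default_realm = HADOOPSECURITY.COM

[realms]
  HADOOPSECURITY.COM = {
    kdc = kdc.hadoopsecurity.com
    admin_server = kdc.hadoopsecurity.com
  }

[domain_realm]
  .hadoopsecurity.com = HADOOPSECURITY.COM
  hadoopsecurity.com = HADOOPSECURITY.COM
```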
10-08-2019
11:23 PM
@irfangk1 There is something wrong; I don't see the database entry in your ambari.properties, so how will it bind? It should contain something like the below:
server.jdbc.url=jdbc:postgresql://<HOSTNAME>:<PORT>/ambari?ssl=true
Can you validate? You are using the embedded Postgres, right?
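For comparison, the database block in /etc/ambari-server/conf/ambari.properties for the embedded Postgres usually looks roughly like the sketch below; the exact property set can differ between Ambari versions, so treat it as a reference rather than a definitive list:

```properties
# Sketch of typical embedded-Postgres entries in ambari.properties (values are common defaults)
server.jdbc.database=postgres
server.jdbc.database_name=ambari
server.jdbc.postgres.schema=ambari
server.jdbc.user.name=ambari
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
```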
10-08-2019
07:54 PM
@Splash The problem you are facing is well known with NiFi: "There was an issue decrypting protected properties". It means NiFi can't decrypt the protected passwords in nifi.properties. Have a look at this link on nifi.properties, and read section 3, "Setting up / Migrating the encryption key", carefully; you might need to run the encrypt-config.sh script. Please let me know if you need more help.
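As a rough sketch of that step (the toolkit path, flag names, and password below are assumptions and vary by NiFi Toolkit version, so check the script's usage output first):

```bash
# Hypothetical invocation: re-protect nifi.properties with a key derived from a new password
/opt/nifi-toolkit/bin/encrypt-config.sh \
  -n /opt/nifi/conf/nifi.properties \
  -b /opt/nifi/conf/bootstrap.conf \
  -p 'newMasterPassword'
```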
10-08-2019
01:42 PM
1 Kudo
@Gcima009 When you generate templates in NiFi, they are stripped of all encrypted values, so when importing those templates into another NiFi cluster you will have to populate all the processor and controller service passwords manually. Check whether the node that is not starting has values in the parameters below.
Backing up flow.xml.gz or the flow.tar file will capture the entire flow exactly as it is, encrypted sensitive passwords and all, but NiFi will not start if it cannot decrypt the encrypted sensitive properties contained in the flow.xml. When sensitive properties (e.g. passwords) are added, they are encrypted using these settings from your nifi.properties file:
# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
In order to drop your entire flow.xml.gz or flow.tar onto another clean NiFi, these values must all match exactly.
Ref: http://www.contemplatingdata.com/2017/08/28/apache-nifi-sensitive-properties-need-know/
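A quick way to compare those settings between the source and destination nodes (the conf path is an assumption; adjust to your install) is simply:

```bash
# Run on each NiFi node and compare the output side by side
grep '^nifi.sensitive.props' /opt/nifi/conf/nifi.properties
```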
10-08-2019
01:14 PM
1 Kudo
@marcusvmc root is not a normal HDP user but an OS superuser used to escalate privileges for changes at the host level. The HBase superuser is hbase, just like hdfs 🙂 Ranger reads /etc/passwd and /etc/group and ONLY loads (syncs) users/groups whose id is > 500. If you want to trick Ranger into syncing root, whose entry is root:x:0:0:root:/root:/bin/bash (uid 0), then you have to lower the minimum user ID as described below.
Procedure: configure Ranger user sync for UNIX:
1. On the Ranger Customize Services page, select the Ranger User Info tab.
2. Click Yes under Enable User Sync.
3. Use the Sync Source drop-down to select UNIX, then set the following properties:
Table 1. UNIX user sync properties
| Property | Description | Default value |
|---|---|---|
| Minimum user ID | Only sync users above this user ID. | 500 |
| Password file | The location of the password file on the Linux server. | /etc/passwd |
| Group file | The location of the groups file on the Linux server. | /etc/group |
Question: why would you want root's rights managed by Ranger? Use sudo if you want to impersonate root.
I hope that helps !!
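For a quick sanity check of why root gets skipped, and where the threshold lives under the hood, something like the below works; the property name (ranger.usersync.unix.minUserId) and the usersync conf path are from memory and may differ by HDP/Ranger version, so verify them on your cluster:

```bash
# root's uid is 0, well below the default minimum user ID of 500
id -u root

# Hypothetical check of the underlying usersync setting on the Ranger usersync host
grep -i minUserId /etc/ranger/usersync/conf/ranger-ugsync-site.xml
```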
10-07-2019
02:16 AM
@irfangk1 Can you share your ambari.properties and your /etc/hosts?
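For reference, on the Ambari server host those usually live at the default paths below; adjust if your install differs:

```bash
cat /etc/ambari-server/conf/ambari.properties
cat /etc/hosts
```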
10-06-2019
09:07 AM
1 Kudo
@ThanhP Good, everything is perfect for you now 🙂 You ONLY execute sudo -u hdfs hdfs namenode -format as a last resort: it is dangerous and not recommended on a production cluster because it re-initializes (formats) your NameNode, deleting all the metadata stored on it. Having said that, the answer you accepted can't help a member who encounters the same issue ("HDFS NameNode won't leave safemode"); maybe you should un-accept it and accept your own answer, as it's the more realistic one. Happy hadooping
10-06-2019
05:08 AM
1 Kudo
@ThanhP I can see you are accessing the web shell UI on 192.168.1.37, yet the sandbox is pointing to 172.18.0.2:8020. Now if you type http://192.168.1.37:1080/splash2.html in your browser you should land on the splash screen. Choose the Ambari UI link; this should give you the normal Ambari user UI. If you reset your password before, as I had stated, use that admin/[password] combination, and you will realize that none of the components are started (HDFS, YARN, MR, HIVE, HBASE etc.).
Note: some components are in maintenance mode; unless you want any of them, leave them there, and beware of the dependencies, e.g. ATLAS must have HBASE running. That could explain why you got the error "Failed on connection error java.net.Connection".
At a certain point during your startup, "Timeline Service V1.5" will error out with a message to do with "safemode on". Get to the shell as hdfs:
$ hdfs dfsadmin -safemode get
This should indicate it's ON, so get it out of safemode:
$ hdfs dfsadmin -safemode leave
Then use the Ambari "Start All" option again; it will pick up from the last point of failure. After the components start up you can simply access them, e.g. the NameNode UI will automatically point to http://sandbox-hdp.hortonworks.com:50070/dfshealth.html#tab-overview
Please do that and revert.
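Put together, the shell side of those steps looks roughly like this sketch (run inside the sandbox web shell; whether safemode stays off afterwards depends on the DataNode block reports):

```bash
# Switch to the hdfs superuser
su - hdfs

# Check safemode, then leave it
hdfs dfsadmin -safemode get      # expect: Safe mode is ON
hdfs dfsadmin -safemode leave

# Optional: confirm the NameNode can see its DataNode(s) before using Ambari "Start All"
hdfs dfsadmin -report
```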
10-05-2019
10:52 AM
@Mitchel Is your VirtualBox running on Ubuntu? How much memory are you allocating to VirtualBox? Also describe your network setup. See the attached screenshots (Page1, Page2, page01).
10-05-2019
04:30 AM
1 Kudo
@ThanhP What @Herman was trying to explain is what happens in the background; you cannot influence or manually replicate that process. The screenshots I attached are the best guidance you can get for resolving the issue, because they are from exactly the same sandbox you are using. So follow those steps and revert.