Member since
11-05-2017
20
Posts
0
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| | 3177 | 02-15-2018 05:51 AM |
02-21-2018
02:21 PM
In my case, it was the PRIVATE_IP variable in the Profile file. Once removed, it started working.
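For anyone hitting the same thing, the change was essentially the following (the deployer path and the example address are only illustrative, adjust to your own setup):

cd /var/lib/cloudbreak-deployment        # or wherever your cbd Profile lives
# remove or comment out the line that pins the private address, e.g.:
#   export PRIVATE_IP=10.0.0.4
cbd restart                              # regenerate and restart so the Profile change takes effect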
02-19-2018
05:30 AM
@pdarvasi Finally found the solution to this issue. Here are my findings: when we started HDP using Cloudbreak, the default HDP configuration calculated the non-HDFS reserved space "dfs.datanode.du.reserved" as roughly 3.5% of the total disk of a datanode (taken across the compute config groups) that had three drives, one of them in the TB range. Our default datanode data directory "dfs.datanode.data.dir", however, pointed at the drive with the lowest capacity, only about 3% of the overall datanode storage. Because that 3% is less than the 3.5% reserved, HDFS capacity came out as 0%, and the few KBs of existing supporting directories and files on that drive pushed the reported datanode capacity into negative KBs. To fix the downscaling issue, we either need to lower the non-HDFS reserved capacity (below that 3%) or point the datanode at a drive with more capacity (above the 3.5%). I tried this and it worked. No more need to change the WASB URI, so I am keeping it as the default storage. Thank you again for your suggestions.
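Concretely, the two HDFS properties involved are the ones below; the reserved-bytes value and the data directory path are only examples, not our actual settings:

dfs.datanode.du.reserved = 10737418240                 # non-HDFS reserved space in bytes (~10 GB); keep it well below the data volume's size
dfs.datanode.data.dir = /hadoopfs/fs1/hdfs/datanode    # point this at the larger drive instead of the smallest one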
02-21-2018
12:37 PM
@wbu Thank you for the post, but could you please help me understand how you created the HORTONWORKS.COM realm and the "hadoopadmin" principal on the Mac, for which you then generated a ticket using the principal's password? I am using "kadmin -l" to initialize a new realm "EXAMPLE.COM" (in line with the cluster realm) and the username "hadoopadmin", but when I try adding the realm using "init -r <realm name>", I get:
kadmin: create_random_entry(krbtgt/EXAMPLE.COM@EXAMPLE.COM): randkey failed: Principal does not exist
Or if I try adding a principal with "add -r hadoopadmin@EXAMPLE.COM", I get:
kadmin: adding hadoopadmin@EXAMPLE.COM: Principal does not exist
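For reference, this is the local sequence I have been trying to get working (a rough sketch from my reading of the Heimdal kadmin man page; I may still be missing a prerequisite such as creating the KDC master key with kstash):

sudo kadmin -l                                        # local admin mode against the Mac's own Heimdal database
kadmin> init EXAMPLE.COM                              # Heimdal's init takes the realm name directly, without -r
kadmin> add --use-defaults hadoopadmin@EXAMPLE.COM    # prompts for the new principal's password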
Here is my Kerberos client configuration (vi /Library/Preferences/edu.mit.Kerberos or vi /etc/krb5.conf):
[domain_realm]
.example.com = "EXAMPLE.COM"
example.com = "EXAMPLE.COM"
[libdefaults]
default_realm = "EXAMPLE.COM"
dns_fallback = "yes"
noaddresses = "TRUE"
[realms]
EXAMPLE.COM = {
admin_server = "ad.example.com"
default_domain = "example.com"
kdc = "ad.example.com"
}
As far as I understand, the following steps must be performed on the Mac before the steps given above:
1. Create /etc/krb5.conf (vi /etc/krb5.conf)
2. Create a new realm "EXAMPLE.COM" (the same as the Hadoop cluster's Kerberos realm)
3. Create a new user principal "hadoopadmin" (the same as the Hadoop cluster Kerberos principal used to access the services)
4. Only then can I create a ticket (kinit) with the password used in step 3 when creating the user principal (a quick kinit/klist check is sketched below)
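Once the principal exists, the check I expect to run is just the standard client side, with the realm and principal names from above:

kinit hadoopadmin@EXAMPLE.COM    # prompts for the password set when the principal was created
klist                            # should list a krbtgt/EXAMPLE.COM@EXAMPLE.COM ticket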
Regards,