Member since: 11-05-2017
Posts: 20
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1448 | 02-15-2018 05:51 AM |
02-06-2019
02:24 PM
@Kartik Ramalingam Thank you for your wonderful and helpful post! The Ranger authorization part is still incorrect, though: Ranger authorization is already enabled in the initial topology, so Step 7 should describe disabling it by changing the authorization provider value from XASecurePDPKnox to AclsAuthz. The Step 7 example also needs to be corrected accordingly. Regards, Sakhuja
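For anyone following along, a hedged way to check which provider a topology currently uses (the topology file name and location are assumptions for a default HDP Knox install; adjust for your gateway):
grep -A 3 "<role>authorization</role>" /etc/knox/conf/topologies/default.xml
# With Ranger authorization disabled, the provider name in that block should read AclsAuthz rather than XASecurePDPKnox.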
08-23-2018
03:23 PM
So whenever you change your LDAP configuration, you need to sync your Ambari configuration with it. You can use:
ambari-server sync-ldap --<all/existing/users=LDAP_SYNC_USER>
and provide your Ambari admin credentials for a successful synchronization. Regards, Sakhuja
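A hedged usage sketch (assumes LDAP is already configured for the Ambari server; each command prompts for the Ambari admin credentials):
ambari-server sync-ldap --all                                   # sync all LDAP users and groups
ambari-server sync-ldap --existing                              # refresh only users/groups already known to Ambari
ambari-server sync-ldap --users users.txt --groups groups.txt   # sync specific entries listed in CSV files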
02-21-2018
02:21 PM
In my case, it was the PRIVATE_IP variable set in the Profile file. Once I removed it, it started working.
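In case it helps someone, a hedged sketch of the fix (the deployment directory and the cbd commands are assumptions about a standard Cloudbreak deployer setup):
cd /var/lib/cloudbreak-deployment   # assumed deployer directory; yours may differ
grep -n "PRIVATE_IP" Profile        # locate the offending variable
# remove or comment out that line, then regenerate the configs and restart:
cbd regenerate && cbd restart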
02-21-2018
12:37 PM
@wbu Thank you for the post, but could you please help me understand how you created the HORTONWORKS.COM realm and the "hadoopadmin" principal on the Mac, for which you then generated a ticket using the principal's password? I am using "kadmin -l" to initialize a new realm "EXAMPLE.COM" in line with the cluster realm, and also the username "hadoopadmin", but when I try adding the realm with:
init -r <realm name>
I get:
kadmin: create_random_entry(krbtgt/EXAMPLE.COM@EXAMPLE.COM): randkey failed: Principal does not exist
Or if I try adding a principal with "add -r hadoopadmin@EXAMPLE.COM", I get:
kadmin: adding hadoopadmin@EXAMPLE.COM: Principal does not exist
vi /Library/Preferences/edu.mit.Kerberos OR vi /etc/krb5.conf
[domain_realm]
.example.com = "EXAMPLE.COM"
example.com = "EXAMPLE.COM"
[libdefaults]
default_realm = "EXAMPLE.COM"
dns_fallback = "yes"
noaddresses = "TRUE"
[realms]
EXAMPLE.COM = {
admin_server = "ad.example.com"
default_domain = "example.com"
kdc = "ad.example.com"
}
As far as I understand, the following steps must be performed on the Mac before the steps given above:
1. Create /etc/krb5.conf (vi /etc/krb5.conf)
2. Create a new realm "EXAMPLE.COM" (same as the Hadoop cluster Kerberos realm)
3. Create a new user principal "hadoopadmin" (same as the Hadoop cluster Kerberos principal used to access the services)
4. Only then can I create a ticket (kinit) with the same password used in step 3 when creating the user principal (a rough sketch of these commands follows below)
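For example, a rough sketch of those steps using Heimdal's local admin mode on the Mac (hedged: the realm, the principal, and the use of a local database are assumptions taken from the steps above, not a verified recipe):
sudo vi /etc/krb5.conf          # point default_realm and [realms] at EXAMPLE.COM
sudo kadmin -l                  # Heimdal local admin mode
  init EXAMPLE.COM              # initialize the local realm database
  add hadoopadmin@EXAMPLE.COM   # create the user principal (prompts for a password)
  exit
kinit hadoopadmin@EXAMPLE.COM   # then obtain a ticket with that password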
Regards,
02-19-2018
05:30 AM
@pdarvasi Finally found the solution to this issue. Here are my findings: When we started HDP using Cloudbreak, the HDP default configuration had calculated the non-HDFS reserved storage "dfs.datanode.du.reserved" (approx. 3.5% of total disk) based on the lowest-capacity storage configured for a datanode (among the compute config groups), which had three drives, one of them in the TB range. Our default datanode data directory "dfs.datanode.data.dir" was pointing to the drive with the lowest capacity (around 3% of the overall datanode storage). Since 3% < 3.5%, the usable HDFS capacity became 0, and the few KB of supporting directories and files already on the datanode then pushed its reported capacity negative. To fix the downscaling issue, we either need to lower the non-HDFS reserved capacity (below 3%) or point the datanode at a larger disk (above 3.5%). I tried this and it worked, so there is no more changing of the WASB URI and it stays as the default storage. Thank you for your suggestions.
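For anyone hitting the same thing, a hedged way to sanity-check the two values involved (assumes shell access to a node with the HDFS client configuration in place):
hdfs getconf -confKey dfs.datanode.du.reserved   # reserved non-HDFS bytes per datanode volume
hdfs dfsadmin -report                            # configured vs. remaining capacity per datanode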
02-15-2018
05:51 AM
Thank you @pdarvasi for answering it in another post: https://community.hortonworks.com/questions/171210/hortonworks-cloudbreak-default-hdfs-as-azure-wasb.html
02-14-2018
01:52 PM
@pdarvasi Thank you so much for answering; this is what I was looking for. Let me see how we can make this work.
02-14-2018
12:25 PM
Thank you for your quick response. I don't have any plan to keep the data on local HDFS and am therefore using Azure WASB for all storage (no worry of data loss). Redundancy will be covered by WASB. So, when I am using WASB as the default storage, why should I have to add capacity to local HDFS just for decommissioning?
02-14-2018
11:41 AM
I configured Azure WASB storage as the default HDFS location through Cloudbreak, which made the local HDFS capacity show as 0 in Ambari (100% utilized). I have default replication set to 1, but now when I try to decommission a node, the datanode tries to rebalance about 28 KB of data to another available datanode. However, our HDFS has 0 capacity, so decommissioning fails with the error given below:
New node(s) could not be removed from the cluster. Reason Trying to move '28672' bytes worth of data to nodes with '0' bytes of capacity is not allowed
Getting the cluster information shows that the local HDFS still holds a few KB that need to be rebalanced, while the available capacity is 0:
"CapacityRemaining" : 0,
"CapacityTotal" : 0,
"CapacityUsed" : 131072,
"DeadNodes" : "{}",
"DecomNodes" : "{}",
"HeapMemoryMax" : 1060372480,
"HeapMemoryUsed" : 147668152,
"NonDfsUsedSpace" : 0,
"NonHeapMemoryMax" : -1,
"NonHeapMemoryUsed" : 75319744,
"PercentRemaining" : 0.0,
"PercentUsed" : 100.0,
"Safemode" : "",
"StartTime" : 1518241019502,
"TotalFiles" : 1,
"UpgradeFinalized" : true,
Cloudbreak uses an AmbariDecommissioner check (shipped in the Cloudbreak jar) to decide whether HDFS is running out of space. Is there a way to change this jar?
if (remainingSpace < safetyUsedSpace) {
throw new BadRequestException(
String.format("Trying to move '%s' bytes worth of data to nodes with '%s' bytes of capacity is not allowed", usedSpace, remainingSpace)
);
}
Reference link: https://github.com/hortonworks/cloudbreak/blob/1.16.4/core/src/main/java/com/sequenceiq/cloudbreak/service/cluster/flow/AmbariDecommissioner.java#L314-L321
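For reference, a hedged way to pull the same NameNode counters via the Ambari REST API (credentials and cluster name are placeholders, and it is an assumption that these counters sit under ServiceComponentInfo on the NAMENODE component, which is where the JSON above appears to come from):
curl -u admin:admin "http://AmbariURI:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=ServiceComponentInfo/CapacityRemaining,ServiceComponentInfo/CapacityTotal,ServiceComponentInfo/CapacityUsed"
# A CapacityRemaining of 0 together with a non-zero CapacityUsed is exactly the condition that trips the decommission check above.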
Labels:
- Apache Hadoop
- Hortonworks Cloudbreak
02-13-2018
08:21 AM
@pdarvasi I am still stuck at the same issue. Is it possible to edit "AmbariDecommissioner.java" in the jar? If yes, where can I find it on the Cloudbreak machine? Thanks for your help in advance!
12-12-2017
02:38 PM
@pdarvasi Sorry, but it is still the same because my default HDFS location is Azure Blob storage instead of local. Trying to overcome it!
12-06-2017
02:40 PM
@pdarvasi Thank you for helping clarify the question. Let me point the default HDFS location to cloud storage and check whether the same issue persists. Regards, Sakhuja
12-06-2017
05:20 AM
@mmolnar Thank you for your response. These errors occur about 80% of the time when I am downscaling; the other 20% succeed. This is what I have done:
Cluster configuration: min: 2; max: 3; cooldown period: 30 mins
Master node: 1, worker nodes: 2
Scaling down 1 worker node from the cluster gives this error.
HDP version: 2.5
Cloudbreak version: 1.16.4
Regards, Sakhuja
12-05-2017
10:56 AM
Hello Everyone, has anyone encountered the following issue while scaling in (downscaling) a cluster using a Periscope scaling policy? The same issue has also been observed while using "Remove nodes" from the Cloudbreak UI:
12/5/2017 3:55:18 PM hdpcbdcluster - update failed: New node(s) could not be removed from the cluster. Reason Trying to move '8192' bytes worth of data to nodes with '0' bytes of capacity is not allowed
I only know that 8192 bytes is the Linux default block size. The only way I can scale down the cluster is by manually terminating the machine. Regards, Sakhuja
Labels:
- Hortonworks Cloudbreak
11-28-2017
07:59 AM
Finally nailed it! Periscope has an additional path prefix, which can be found with the docker inspect command:
docker inspect <periscopeContainer> | grep path
Using the same token generated by the Cloudbreak client worked.
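For completeness, a hedged sketch of that check (the container name and the exact grep pattern are illustrative):
docker ps --format '{{.Names}}' | grep -i periscope   # find the Periscope container name
docker inspect <periscopeContainer> | grep -i path    # shows the extra path prefix
The Cloudbreak-generated $TOKEN can then be reused against the Periscope endpoints behind that prefix.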
11-27-2017
11:48 AM
@Artem Ervits @Neeraj Sabharwal Could you please help in answering this question?
11-25-2017
07:25 AM
Hello Everyone, I have installed Cloudbreak on an Azure VM and provisioned HDP with it, which has also enabled the Periscope (default) Web UI (Uluwatu). This has let me set manual alarms and scaling policies from the UI. However, my aim is to achieve the same through API calls. I was able to connect to Cloudbreak using the API method below: first generate the access token, then call the Cloudbreak API with it.
Token generation for the Cloudbreak shell (worked):
export TOKEN=$(curl -iX POST -H "accept: application/x-www-form-urlencoded" -d 'credentials={"username":"admin@example.com","password":"password"}' "http://cloudbreakURI:8089/oauth/authorize?response_type=token&client_id=cloudbreak_shell" | grep Location | cut -d'=' -f 3 | cut -d'&' -f 1)
API call to Cloudbreak (worked):
curl -k -X GET -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://cloudbreakUI/cb/api/v1/stacks/user
Now I want to do the same thing with Periscope. I have followed the Periscope API documentation "https://periscope.docs.apiary.io", which explains that we need to attach an Ambari REST API endpoint to Periscope. Here I got confused: if Periscope is running by default on the Cloudbreak machine and I am able to set alarms in the Periscope UI using Ambari alerts, why do I need to attach it to an existing cluster for API calls?
Periscope is running in the Cloudbreak VM:
[hduser@cloudbreakVM ~]$ ps -ef | grep periscope
hduser 41503 38823 0 06:55 pts/0 00:00:00 grep --color=auto periscope
root 66067 58001 0 Nov24 ? 00:00:00 /bin/bash /start_periscope_app.sh
root 66075 66067 0 Nov24 ? 00:07:55 java -jar /periscope.jar
In one of the GitHub repositories, I found the call below to generate a token for the Periscope client, but it does not generate one in my case.
Token generation for the Periscope client (doesn't work):
curl -iX POST -H "accept: application/x-www-form-urlencoded" -d 'credentials={"username":"admin@example.com","password":"password"}' "http://cloudbreakURI:8089/oauth/authorize?response_type=token&client_id=periscope-client&scope.0=openid&source=login&redirect_uri=http://periscope.client"
ERROR:
HTTP/1.1 500 Internal Server Error
Server: Apache-Coyote/1.1
Cache-Control: no-store
X-XSS-Protection: 1; mode=block
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Content-Language: en
Content-Length: 0
Date: Sat, 25 Nov 2017 06:57:19 GMT
Connection: close
I also tried adding the Ambari REST API endpoint as follows, but no luck.
Attaching the Ambari REST API to Periscope (doesn't work):
curl --include \
--request POST \
--header "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
--data-binary "{
\"host\": \"AmbariURI\",
\"port\": \"8080\",
\"user\": \"admin\",
\"pass\": \"admin\"
}" \
'http://AmbariURI:8080/api/v1/clusters/'
ERROR: {
"status": 403,
"message": "Missing authentication token"
}
When I try to call the Ambari REST API directly, it doesn't return any results:
curl -k -X GET -H "Content-Type: application/json" --user admin:admin http://AmbariURI:8080/api/v1/clusters/<clustername>/policies
curl -k -X GET -H "Content-Type: application/json" --user admin:admin http://AmbariURI:8080/api/v1/clusters/<clustername>/alarms
Can someone help with connecting to the Periscope APIs for autoscaling on a running HDP cluster?
Regards,
Labels:
- Apache Ambari
- Hortonworks Cloudbreak
11-06-2017
03:50 PM
Thank you and will do that!
11-05-2017
10:22 AM
Hello Team, I have a couple of questions on provisioning datanodes using Cloudbreak. First, what is the approximate time to commission a datanode into the cluster (assuming the node is available with all OS prerequisites in place)? Second, if I want to provision 50 datanodes, does the parallelism of provisioning those nodes depend on the cores available on the Ambari server? Example:
1. Ambari server with 32 cores
2. Assuming 20 cores are available on the Ambari server for this process
3. Requirement: 50 datanodes (autoscaling)
Does this mean that only 20 nodes will be provisioned in parallel as a first batch (1 core = 1 datanode provisioning), then 20 more, followed by 10? So, if 15 minutes are taken to provision one batch (20 nodes in parallel), would 50 nodes take 45 minutes to provision? (The arithmetic is sketched below.) Regards, Sakhuja
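To make the arithmetic in the example explicit, a hedged sketch in shell (the 1 core = 1 parallel node assumption comes from the question itself, not from any documentation):
nodes=50; cores=20; mins_per_batch=15
batches=$(( (nodes + cores - 1) / cores ))                                             # ceil(50/20) = 3 batches
echo "$batches batches x $mins_per_batch min = $(( batches * mins_per_batch )) min"    # 45 min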
Labels:
- Apache Ambari
- Apache Hadoop