Member since: 09-10-2015
Posts: 27
Kudos Received: 9
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 779 | 02-03-2016 09:19 PM |
05-03-2016
06:29 PM
I got it. So the documented behavior is accurate if the client is not directly running on a DataNode, but does not account for WebHDFS or a client directly connected to the node. Thanks!
05-03-2016
04:35 PM
1 Kudo
When writing files larger than one block, how are blocks distributed across the datanodes? Documentation seems to indicate large files are split across datanodes (whenever possible), but I'm not sure this is always the case.
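To make the question concrete, here's the arithmetic I'm assuming, with the default 128 MB block size (dfs.blocksize) and replication factor 3 (dfs.replication) - a sketch of my understanding, not HDFS source:

```python
import math

def block_count(file_size_bytes, block_size_bytes=128 * 1024 * 1024):
    """Number of HDFS blocks a file occupies (the last block may be partial)."""
    return max(1, math.ceil(file_size_bytes / block_size_bytes))

# A 1 GB file with the default 128 MB block size splits into 8 blocks.
one_gb = 1024 * 1024 * 1024
print(block_count(one_gb))  # 8

# Each block is placed independently: with replication 3, the default
# policy puts replica 1 on the writer's node (if the writer runs on a
# DataNode), replica 2 on a node in a different rack, and replica 3 on
# another node in that second rack. Distinct blocks of the same file
# can therefore land on overlapping or entirely different node sets.
```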
Tags:
- block
- Hadoop Core
- HDFS
Labels:
- Apache Hadoop
02-03-2016
09:19 PM
The API is not an option at this point. If there's no way to do this in Ambari natively, that's fine - I just need to know that. But workarounds of any kind do not meet the need.
02-03-2016
06:03 PM
🙂 For purposes of this question, yes.
02-03-2016
06:00 PM
2 Kudos
Do we have documentation available around Ambari Flume management? Based on what I know so far, this is what would make sense to me in the real world, assuming an admin does not want to be involved in setting up / maintaining / managing Flume agents:

1) Set up a separate Ambari Server instance for Flume management. (Treat the Flume Configs tab like a View, in essence... if that's doable?) Thus, Flume "admins" (developers) don't have access to HDP cluster management via Ambari.

2) Add a single HDP edge node (logically speaking, at least) to that Ambari instance (is it possible for two Ambari instances to manage different services on the same node?), as well as all other servers on which Flume agents are to be installed, via Ambari add hosts.

3) Create config groups based on server identity - e.g., all web servers might have identical log tracking, the HDP edge node(s) would be its (their) own config group, etc.

I don't have a test environment robust enough to try this out, so any help is appreciated. Thanks!
Labels:
- Apache Ambari
- Apache Flume
12-15-2015
11:28 PM
Ahhh, perfect. Thanks!
12-15-2015
10:51 PM
Here's part 2 of the answer, with the screenshot showing my current proxyuser settings for falcon and oozie:

I realize it's possible I've missed something...
12-15-2015
10:49 PM
Balu - here are the results of the cURL from the Falcon server -- it works fine:

As far as the log file goes, the error I'm getting back indicates failure because Falcon is not set up as a proxy user. Here's the pertinent snippet, I think:

However, as I will show in my next post, I have set up both the falcon and oozie proxyuser settings as * so I'm not sure why that would be an issue here.
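For clarity, these are the properties I mean - the standard hadoop.proxyuser pattern in core-site.xml, plus (since the rejection is coming back from Oozie) the equivalent oozie.service.ProxyUserService.proxyuser settings in oozie-site.xml. The * values mirror what I described above; substitute tighter host/group lists if you scope them:

```xml
<!-- core-site.xml -->
<property>
  <name>hadoop.proxyuser.falcon.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.falcon.groups</name>
  <value>*</value>
</property>

<!-- oozie-site.xml -->
<property>
  <name>oozie.service.ProxyUserService.proxyuser.falcon.hosts</name>
  <value>*</value>
</property>
<property>
  <name>oozie.service.ProxyUserService.proxyuser.falcon.groups</name>
  <value>*</value>
</property>
```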
12-15-2015
04:05 PM
I am attempting to create a Falcon cluster entity using the Web UI.

**My Environment**

I have a functioning Oozie Server (have successfully run workflows from the command line, and I can access the Oozie Web UI without issue). I have a three-node cluster running in Docker. Both Falcon and Oozie are installed on node2.

**The Error**

Note: The way name resolution is set up, FQDNs are not required in my environment, or rather, "node2" is the FQDN equivalent.

**Troubleshooting So Far**

I have tried replacing node2 with the IP address. I get the same error:

I have confirmed that I can access this URL via the Oozie Web UI. Just to be pedantic, I also performed a port scan to confirm that port 11000 was truly open on the server.

What could be causing this error? Are there additional troubleshooting steps I can take?
Labels:
- Apache Falcon
11-30-2015
03:20 PM
Steve - that's a good point. The question, then, is why make it available as a cluster-level service at all? What is the benefit of providing Slider as an installable option, or providing a Slider view in HDP?
11-21-2015
04:30 AM
1 Kudo
Ah, gotcha. So installing the Slider View is functionally equivalent to installing Slider on the cluster, assuming you are launching applications from the View. But to make Slider available via the CLI, you must install it at the cluster level. Makes sense!
11-20-2015
09:46 PM
Successfully running application, Slider View:
List of installed services, no Slider:
11-20-2015
09:42 PM
I am running HDP 2.3 and was capturing screenshots for a lesson under development for Slider. I created a Slider view, created the yarn home directory in HDFS, copied an HBase slider package to the appropriate location, and restarted Ambari server. It was then that I realized I had forgotten to install the Slider service on my cluster.

Thinking it would be a great error message to capture, I went back to the Slider View, created the app per the wizard, and deployed - thinking it would error out and tell me that the service was not installed. To my surprise, the application ran without a problem, and the Slider View allowed me to perform the Flex and other actions with no issues.

Because this is for a training class, I need to understand why this worked. Is there something special about the HBase slider package we make available that allows it to run on a cluster where the service is not installed, or do HDP and Ambari support slider applications without the need to explicitly install the service? If that's true, what does installing Slider on HDP actually accomplish? I will post screenshots in a comment below. Thanks in advance.
Labels:
- Apache Ambari
- Apache HBase
11-19-2015
09:55 PM
Here's the exact command and error returned. Also, here's the documentation link:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_yarn_resource_mgt/content/ref-635c245f-d56c-49c0-97b7-a789012ea71d.1.html
11-19-2015
09:49 PM
According to our docs, I should be able to download a Slider package for HBase at the following link:
http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos6/2.x/updates/2.3.2.0/slider-app-packages/hbase/slider-hbase-app-package-1.1.1.2.3.2.0-2950.zip

However, I am getting a "forbidden" error when I attempt to access/download anything. Is there a reason for this? Is this available at an alternative location?
Labels:
- Apache HBase
11-11-2015
02:12 PM
That was it! Thanks Jonas!
11-11-2015
05:04 AM
And remember, I am writing a publicly available training course, so please, no answers that require hacking that you would not suggest the average administrator attempt. 🙂
11-11-2015
05:01 AM
UPDATE: I finally ran across this issue, which at least explains why the process is not working:

https://issues.apache.org/jira/browse/AMBARI-12811

The question now becomes: how do I update the repo_version table prior to a blueprint-based installation?
11-11-2015
12:12 AM
1 Kudo
I have successfully uploaded a blueprint to an Ambari server. I am now trying to configure default repositories for HDP installation in our classroom lab environment, per the instructions here:

https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-Step4:SetupStackRepositories%28Optional%29

I have created a file, named HDPRepo.test, per the instructions. The command I am using is based on what worked for the blueprint upload, but pointed at the repo URI per the instructions:

curl -u admin:admin -i -H "X-Requested-By: jedentest1" -X POST -d @HDPRepo.test http://node1:8080/api/v1/stacks/HDP/versions/2.3/operating_systems/redhat6/repositories/HDP-2.3/

I can't find this demo'd anywhere on the web, so I'm flying a bit blind. The error being returned is:

{ "status" : 500, "message" : "Cannot create repositories." }

My suspicion is that the documentation is missing some minor piece (possibly a cURL add-on, or perhaps the API URI isn't quite right) that makes this update of the base_url value work. Has anyone actually done this who can point out what I need to modify in my command? Thanks!
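One thing I want to rule out is the HTTP verb: since the HDP-2.3 repo already exists in the stack definition, updating its base_url may need to be a PUT against that same URI rather than a POST (a POST tries to create a new repository, which would explain "Cannot create repositories"). If so, HDPRepo.test would contain a Repositories body along these lines - the base_url value here is just a placeholder for a local mirror, not a real URL:

```json
{
  "Repositories": {
    "base_url": "http://node1/repo/HDP/centos6/2.x/updates/2.3.2.0",
    "verify_base_url": true
  }
}
```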
10-29-2015
09:39 PM
Also - is there a quicker / easier / cleaner way to test the node labels that you used other than the one listed in the docs?
10-29-2015
09:38 PM
Again, with caveats. For example, the sample command given to test the settings says to run it as yarn; however, it failed for me because some permissions now apparently require the hdfs user. My bottom line, which it would be nice to validate, is this:

1) The command-line setup of users and folders is unnecessary, and Ambari sets them up slightly differently than our docs.

2) Permissions are also different for folders (yarn hadoop vs. yarn yarn...).

3) The sample hadoop jar command to test (which I ran as yarn jar) requires switching to hdfs rather than yarn.

Does that sound about right?
10-29-2015
04:23 PM
2 Kudos
Does anyone have (or can you point me to) a link that explains the steps of using YARN node labels in HDP 2.3 being managed by Ambari 2.1? Our existing documentation is for manual configuration only. I've clicked the enabled button, and have already figured out that Ambari creates slightly different directory structures, etc. than are listed. I can keep going with trial and error, but if someone has already documented the process, caveats, etc. that would be fantastic. Thanks!
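For reference, my understanding is that the Ambari toggle ultimately maps to the manual docs' yarn-site.xml properties. These names are from the Hadoop 2.7 node-labels feature; the store path below is an example, not necessarily what Ambari writes:

```xml
<property>
  <name>yarn.node-labels.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.node-labels.fs-store.root-dir</name>
  <value>hdfs://node1:8020/system/yarn/node-labels</value>
</property>
```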
Labels:
- Apache Ambari
- Apache YARN