Member since: 09-10-2015
Posts: 27
Kudos Received: 9
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1549 | 02-03-2016 09:19 PM |
05-03-2016 06:29 PM
I got it. So the documented behavior is accurate when the client is not running directly on a DataNode, but it does not account for WebHDFS or a client running directly on a node. Thanks!
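For anyone who finds this later, the placement behavior we settled on can be sketched roughly like this (a simplified illustration, not Hadoop's actual `BlockPlacementPolicyDefault` code; it ignores rack awareness, and the node names are made up):

```python
import random

def place_replicas(writer, datanodes, replication=3):
    """Simplified sketch of HDFS default replica placement.

    - Replica 1: the writer's node, if the writer is running on a
      DataNode; otherwise a random DataNode.
    - Remaining replicas: other DataNodes (real HDFS also weighs
      rack topology, which this sketch skips).
    """
    nodes = list(datanodes)
    first = writer if writer in nodes else random.choice(nodes)
    rest = random.sample([n for n in nodes if n != first], replication - 1)
    return [first] + rest

# Client running directly on a DataNode: the first replica stays local.
placement = place_replicas("dn2", ["dn1", "dn2", "dn3", "dn4"])
print(placement[0])  # dn2
```

With WebHDFS or a client off-cluster, `writer` isn't in `datanodes`, so even the first replica lands on an arbitrary node.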
05-03-2016 04:35 PM
1 Kudo
When writing files larger than one block, how are the blocks distributed across the DataNodes? The documentation seems to indicate that large files are split across DataNodes whenever possible, but I'm not sure this is always the case.
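To make the "larger than one block" part concrete, here's the arithmetic I have in mind (assuming the common 128 MB default block size; older releases defaulted to 64 MB):

```python
def block_sizes(file_size, block_size=128 * 1024 * 1024):
    """Return the sizes of the HDFS blocks a file is split into.

    Every block is full-size except possibly the last one.
    """
    full, remainder = divmod(file_size, block_size)
    return [block_size] * full + ([remainder] if remainder else [])

mb = 1024 * 1024
# A 300 MB file becomes two full 128 MB blocks plus one 44 MB block;
# the question is whether each block can land on a different DataNode.
print([s // mb for s in block_sizes(300 * mb)])  # [128, 128, 44]
```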
Labels:
- Apache Hadoop
02-03-2016 09:19 PM
The API is not an option at this point. If there's no way to do this in Ambari natively, that's fine - I just need to know that. But workarounds of any kind do not meet the need.
02-03-2016 06:03 PM
🙂 For purposes of this question, yes.
02-03-2016 06:00 PM
2 Kudos
Do we have documentation available around Ambari Flume management? Based on what I know so far, this is what would make sense to me in the real world, assuming an admin does not want to be involved in setting up / maintaining / managing Flume agents:

1) Set up a separate Ambari Server instance for Flume management. (Treat the Flume Configs tab like a View, in essence... if that's doable?) That way, Flume "admins" (developers) don't have access to HDP cluster management via Ambari.

2) Add a single HDP edge node (logically speaking, at least) to that Ambari instance (is it possible for two Ambari instances to manage different services on the same node?), as well as all other servers on which Flume agents are to be installed, via Ambari's add hosts.

3) Create config groups based on server identity - e.g., all web servers might have identical log tracking, the HDP edge node(s) would be their own config group, etc.

I don't have a test environment robust enough to try this out, so any help is appreciated. Thanks!
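On (3), what I'd expect each config group to carry is just a Flume agent properties file. A minimal, hypothetical example for a "web servers" group (agent name `a1`, log path and HDFS path made up for illustration):

```
# Hypothetical flume.conf for a web-server config group
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Tail the web access log (path is an example, not a real setting here)
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/httpd/access_log
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/weblogs
a1.sinks.k1.channel = c1
```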
Labels:
- Apache Ambari
- Apache Flume
12-15-2015 11:28 PM
Ahhh, perfect. Thanks!
12-15-2015 10:51 PM
Here's part 2 of the answer, with the screenshot showing my current proxyuser settings for falcon and oozie: I realize it's possible I've missed something...
12-15-2015 10:49 PM
Balu - here are the results of the cURL from the Falcon server -- it works fine.

As far as the log file goes, the error I'm getting back indicates failure because Falcon is not set up as a proxy user. Here's the pertinent snippet, I think:

However, as I will show in my next post, I have set up both the falcon and oozie proxyuser settings as * so I'm not sure why that would be an issue here.
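To spell out what "set up as *" means here: in core-site.xml terms, my settings amount to the standard Hadoop impersonation properties (shown for falcon; the oozie pair is analogous):

```xml
<property>
  <name>hadoop.proxyuser.falcon.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.falcon.groups</name>
  <value>*</value>
</property>
```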
12-15-2015 04:05 PM
I am attempting to create a Falcon cluster entity using the Web UI.

**My Environment**

I have a functioning Oozie Server (I have successfully run workflows from the command line, and I can access the Oozie Web UI without issue). I have a three-node cluster running in Docker. Both Falcon and Oozie are installed on node2.

**The Error**

Note: The way name resolution is set up, FQDNs are not required in my environment; or rather, "node2" is the FQDN equivalent.

**Troubleshooting So Far**

I have tried replacing node2 with the IP address; I get the same error. I have confirmed that I can access this URL via the Oozie Web UI. Just to be pedantic, I also performed a port scan to confirm that port 11000 was truly open on the server.

What could be causing this error? Are there additional troubleshooting steps I can take?
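One more check I can run if it helps (assuming Oozie on node2:11000, as in my setup) - hitting the Oozie REST endpoint directly rather than just scanning the port:

```shell
# Confirm the Oozie REST API itself answers, not just that the port is open
curl http://node2:11000/oozie/v1/admin/status
# A healthy server should report its system mode as NORMAL
```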
Labels:
- Apache Falcon
11-30-2015 03:20 PM
Steve - that's a good point. The question, then, is why make it available as a cluster-level service at all? What is the benefit of providing Slider as an installable option, or providing a Slider view in HDP?