Member since
12-07-2015
16
Posts
11
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3173 | 03-20-2016 10:28 PM
03-21-2016
08:20 AM
@ARUNKUMAR RAMASAMY Cool, glad I could help. In your case I would restart the HDFS services (which you probably have done already) and Ranger Admin as well. Could you select the answer that helped you most as the "Best Answer"? 🙂
03-20-2016
10:28 PM
1 Kudo
@ARUNKUMAR RAMASAMY Your ranger-admin-site.xml says that Solr is used as the audit source. Since you are not using Solr(?), that might be why you do not see any audit events in the Ranger UI. Did you check your MySQL database directly to see if audit data is stored there?
03-18-2016
04:15 PM
1 Kudo
@ARUNKUMAR RAMASAMY Could you share the config as a file? Also, what is the output of find /usr/hdp -type f -name "*ranger-hdfs-plugin*" on the namenode?
03-02-2016
04:48 PM
Thank you @vpoornalingam! Two options, both of which are more effort than @vsharma's options though.
03-02-2016
04:47 PM
Great thanks!
03-02-2016
03:18 PM
1 Kudo
When automating the setup of Ambari, is there an easy way of checking if "ambari-server setup" was already done, in order to skip this step in a script?
Labels:
- Apache Ambari
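One way to sketch such a check (a heuristic of my own, not an official Ambari API): treat setup as done if the server's properties file already contains a JDBC database entry that "ambari-server setup" would have written. The file path and property name here are assumptions.

```python
import os

# Assumed default location of the Ambari server properties file.
DEFAULT_PROPS = "/etc/ambari-server/conf/ambari.properties"

def ambari_setup_done(props_path=DEFAULT_PROPS):
    """Heuristic, not an official check: assume setup has already run if
    the properties file exists and contains a configured JDBC database
    setting (an assumption about what setup writes)."""
    if not os.path.isfile(props_path):
        return False
    with open(props_path) as f:
        return any(line.startswith("server.jdbc.database=") for line in f)

if __name__ == "__main__":
    # In an automation script, skip "ambari-server setup" when this is True.
    print("setup done" if ambari_setup_done() else "setup needed")
```

In a provisioning script this would gate the setup step, e.g. only invoking "ambari-server setup -s" when the check returns False.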
01-18-2016
08:02 AM
Thanks @lmccay. I just noticed that it isn't a matter of the age of the docs (they didn't change much), but a matter of how I used WebHDFS PUT. And sometimes asking the question helps to solve the problem as well 😉 -> I edited my question to describe the problem that I had.
01-17-2016
09:37 PM
According to the WebHDFS documentation (https://hadoop.apache.org/docs/r1.0.4/webhdfs.html#CREATE) I need to send two HTTP PUT requests: one to the namenode, and one to the datanode returned by the first request. This works fine as long as I have access to these nodes. But how does a PUT work from outside a cluster, where everything between the HTTP client and the cluster is behind a firewall except the one entry point, which is Knox? Does it work at all?

EDIT: Now it works. Here is what went wrong: I have full access to the cluster, which led me to send the first request to Knox's internal IP address. Knox then answered with the internal address of a datanode. That works for me, since I have full access, but it wouldn't for others who only see the Knox node from outside. When addressing Knox by its external IP address, the first request also returns that external address.
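The pitfall above can be illustrated with a small check. This is my own sketch, not part of WebHDFS or Knox: given the Location header returned by the first CREATE request, flag redirect targets that point at a private/special-use IP address, since step 2 cannot reach those from outside the firewall.

```python
import ipaddress
from urllib.parse import urlparse

def redirect_is_internal(location_url):
    """Return True if the Location header of a WebHDFS CREATE redirect
    points at a private or special-use literal IP address (i.e. one that
    is unreachable from outside the cluster's firewall)."""
    host = urlparse(location_url).hostname
    try:
        return ipaddress.ip_address(host).is_private
    except ValueError:
        # Hostname rather than a literal IP: cannot tell without DNS.
        return False

# Example: a redirect to 10.x.x.x is only usable from inside the cluster.
print(redirect_is_internal("http://10.0.0.5:50075/webhdfs/v1/tmp/f?op=CREATE"))
```

Addressing Knox by its external IP, as described in the edit, makes the redirect target external as well, so this check would return False.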
Labels:
- Apache Hadoop
- Apache Knox
01-16-2016
07:24 PM
@Neeraj Sabharwal Sorry, but your answer is equivalent to what I explained did not work, and I got this information from the official doc already. I do not believe I have missed anything: I created a file, deployed it on all nodes, configured the property, and restarted. After the restart my topology file had the content of the default one... I tried it two times. I will try it in Ambari 2.1.2 to see if it shows the same behaviour. Where do I open the JIRA?