Member since: 03-29-2016
Posts: 38
Kudos Received: 7
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 599 | 06-25-2016 09:25 AM |
10-05-2017
04:38 AM
Thanks Aditya
10-05-2017
04:38 AM
Thanks Sarath.
10-04-2017
05:25 AM
Hello All, I would like to know the status of Atlas in HDP 2.6 and HDP 3.0. Is it in the technical preview stage or the GA stage? Thanks
07-14-2017
02:01 PM
Is there a way to install ELK through Ambari? https://github.com/maweina/ambari-elk-service - I saw this GitHub page, but was not sure whether any plugins are available apart from the one listed above.
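From the repo's layout, the usual install flow for a third-party Ambari service would be roughly the following (a sketch; the stack version and target path are assumptions - adjust to your HDP version):

```
# Hedged sketch: dropping a third-party service definition into Ambari
# (the HDP version and the "ELK" directory name are assumptions)
cd /var/lib/ambari-server/resources/stacks/HDP/2.6/services
git clone https://github.com/maweina/ambari-elk-service.git ELK

# Restart Ambari Server so it picks up the new service definition;
# it should then show up under "Add Service" in the UI
ambari-server restart
```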
12-16-2016
12:26 AM
@Eyad Garelnabi, thanks for your response.
12-16-2016
12:25 AM
@Artem Ervits, thanks for your pointed replies; they give a lot of clarity. Yes, I do understand this, but I now need to find a way. Certain clusters are on HDP 2.4 (for some specific reasons), which means we cannot have data governance through the easier tag-based policy approach. We can still do things with Ranger, but it has to be done on a per-service basis. What I like about tags is that they cut across services (HDFS, Hive, ...). Even new data that enters can be classified with the same tag, and all the access rules get inherited automatically.
12-15-2016
12:49 PM
1 Kudo
HDP 2.4.3 - Ranger 0.5.2 and Atlas 0.5. The Atlas UI does not provide a way to create a tag, but I understand that we can create one using the API provided by Atlas. Now, can Ranger 0.5.2 get/know about this tag information that was created in Atlas?
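For context, here is roughly how I understand tag (trait) creation through the API would look (a sketch; the payload field names are my reading of the Atlas 0.5-era typesystem API, so please verify against your version's docs):

```
# Hedged sketch: define a "PII" trait via the old Atlas typesystem API
# (host, port, credentials, and payload shape are assumptions)
curl -u admin:admin -X POST -H 'Content-Type: application/json' \
  'http://atlas-host:21000/api/atlas/types' -d '{
    "enumTypes": [],
    "structTypes": [],
    "traitTypes": [{
      "superTypes": [],
      "hierarchicalMetaTypeName": "org.apache.atlas.typesystem.types.TraitType",
      "typeName": "PII",
      "typeDescription": null,
      "attributeDefinitions": []
    }],
    "classTypes": []
  }'
```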
11-09-2016
04:35 AM
@Artem Ervits will definitely submit an article once it is implemented.
11-08-2016
05:54 AM
Thanks Artem Ervits. We were considering extending the HadoopSink class for our purpose - say, a GangliaSink. An instance of this GangliaSink class would run on each of the machines (similar to the gmond of Ganglia). In this case, we may have to install gmetad on one of the machines in the cluster. We will check out the options provided by you too. Thank you so much.
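For reference, the direction we are exploring looks roughly like the built-in metrics2 Ganglia sink wiring below (a sketch; gmetad-host, the period, and the config path are placeholders):

```
# Hedged sketch: emit metrics2 data to Ganglia alongside AMS
# (gmetad-host and the 10s period are placeholders)
cat >> /etc/hadoop/conf/hadoop-metrics2.properties <<'EOF'
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
namenode.sink.ganglia.servers=gmetad-host:8649
datanode.sink.ganglia.servers=gmetad-host:8649
EOF
```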
11-02-2016
04:32 PM
There is a DC-wide Ganglia monitoring setup in place, and the clusters are on HDP 2.2 as of today. Now some of the new clusters need HDP 2.4, and HDP 2.4 does not support Ganglia. Is there a way by which Ambari Metrics can push the data Ganglia needs to Ganglia?
11-02-2016
04:57 AM
Hello Ajaysingh, did you find any pointers for Ambari Metrics integration or interaction with datacenter-wide Ganglia? Please do share your experience or pointers for the same. Thanks in advance 🙂
11-02-2016
04:56 AM
Hello Jeff, your pointers for Ambari Metrics integration or interaction with Nagios are very useful. Are there any similar instructions for integration with enterprise Ganglia?
09-23-2016
09:58 AM
Thanks @deepak sharma. Any roadmap plans - say, might we see this feature in some x.y release in a year or so?
09-23-2016
06:19 AM
1 Kudo
Can Ranger be used for data entitlements in an object store? Essentially, does Ranger have plugins for object stores like Amazon S3 or VMWare ECS (elastic cloud storage)?
09-23-2016
06:16 AM
1 Kudo
Can Ranger policies be used for data governance across multiple clusters? A sample scenario: let us say I have around 10 clusters. For all of these clusters, I want to apply the same Ranger policies for data access - which user can access what data - through Ranger.
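To make the scenario concrete, the kind of workflow I am hoping for looks roughly like this (a sketch only; the hosts, credentials, and service name are placeholders, and the public REST paths should be verified against your Ranger version):

```
# Hedged sketch: copying one service's policies between Ranger Admins.
# ranger-a/ranger-b, admin:admin, and "hadoopdev" are placeholders.
curl -u admin:admin \
  'http://ranger-a:6080/service/public/v2/api/service/hadoopdev/policy' \
  -o policies.json

# Replay each policy into the second Ranger Admin (requires jq);
# drop the source-side id/guid so the target assigns fresh ones.
jq -c '.[] | del(.id, .guid)' policies.json | while read -r p; do
  curl -u admin:admin -H 'Content-Type: application/json' \
    -X POST -d "$p" 'http://ranger-b:6080/service/public/v2/api/policy'
done
```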
06-25-2016
09:35 AM
Please note that the namespace ID referred to here is not the one you find in the file /hadoop/hdfs/namenode/current/VERSION; it is the value of the dfs.nameservices property.
06-25-2016
09:30 AM
Thanks Kuldeep for your inputs. Finally found the reason: the value should be the nameservice that we have chosen for the cluster, because the cluster I was trying is an HA cluster. So, if we put a specific host name, we will be in trouble if that host is not available (if it is down). By keeping the nameservice, things are better.
06-25-2016
09:25 AM
Got it! fs.defaultFS - this is in core-site.xml. The value should be set to hdfs://<nameservice> (where the nameservice is the one defined for the cluster, i.e., the value of dfs.nameservices). It works.
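For anyone hitting the same thing later, a quick sanity check (a sketch; "mycluster" is a placeholder nameservice):

```
# Hedged sketch: fs.defaultFS should point at the HA nameservice,
# i.e. the value of dfs.nameservices ("mycluster" is a placeholder)
hdfs getconf -confKey dfs.nameservices   # e.g. mycluster
hdfs getconf -confKey fs.defaultFS       # should print hdfs://mycluster
```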
06-24-2016
06:30 PM
@Kuldeep - tried some hadoop operations like ls or put; every command is failing, as each request connects to localhost:8020 rather than to either the active or the standby namenode. Checked the configs involving 8020; see the attached file 8020.jpg
06-24-2016
06:19 PM
@Kuldeep - Yes, the /etc/hosts file on all the nodes (including data nodes) has the right details for the namenode and the other nodes in the cluster. True, it is really not clear why the datanode is trying to connect to 8020 on localhost; it should have contacted the namenode. This is a freshly created cluster and no operations have started yet.
06-24-2016
05:42 PM
1 Kudo
HDP-2.3.4.7-4, Ambari version 2.2.1.1. All services are up and running except for the History Server. Could not find any related errors in the namenode or datanode logs. Following is the error reported by Ambari:

File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 191, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT -T /usr/hdp/2.3.4.7-4/hadoop/mapreduce.tar.gz 'http://standbynamenode.sample.com:50070/webhdfs/v1/hdp/apps/2.3.4.7-4/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403.
{
"RemoteException": {
"exception": "ConnectException",
"javaClassName": "java.net.ConnectException",
"message": "Call From datanode.sample.com/10.250.98.101 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused"
}
}

Status code 403 indicates that the request is correct but probably not authorized? Any pointers will be helpful. Thanks,
06-10-2016
11:57 AM
Hello Alexandru, it worked, thank you. Now I am checking for similar properties for the Resource Manager, Hive, and Oozie, to make them highly available from the blueprint itself rather than creating the cluster using Ambari and then manually making those services HA. Thanks, Mohan
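For reference, the ResourceManager side appears to need roughly the following yarn-site entries in the blueprint (a sketch; the FQDNs, rm ids, and cluster-id are placeholders, and the list is from memory, so cross-check with the Ambari HA blueprint wiki):

```
# Hedged sketch: yarn-site entries for RM HA in a blueprint
# (hostnames, rm ids, and cluster-id are placeholders)
yarn.resourcemanager.ha.enabled=true
yarn.resourcemanager.ha.rm-ids=rm1,rm2
yarn.resourcemanager.hostname.rm1=master1.sample.com
yarn.resourcemanager.hostname.rm2=master2.sample.com
yarn.resourcemanager.cluster-id=yarn-cluster
yarn.resourcemanager.recovery.enabled=true
yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
yarn.resourcemanager.zk-address=zk1.sample.com:2181,zk2.sample.com:2181,zk3.sample.com:2181
```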
06-10-2016
10:38 AM
Hello Alexandru, great - I think that should be the reason, 100% :-). Will verify the same and get back to mark this as an answer. Your quick pointer to the issue has been really, really helpful. Thank you so much. brgds, Mohan
06-10-2016
10:20 AM
hdfs-ha-blueprint.txt clustertemplate.txt @Alexandru Anghel: Thanks for the quick revert. Saw the link that you posted; it is similar to the ones we had tried. Yes, the ZKFC component is installed on the same host where the NAMENODE component is installed. Attaching two files: the blueprint and the template.
06-10-2016
08:33 AM
Wanted to set up an HA cluster (active namenode and standby namenode). We did not want a secondary namenode to be present - just two namenodes, with one active and the other standby. Used the Ambari blueprint exactly as outlined in the link: https://cwiki.apache.org/confluence/display/AMBARI/Blueprint+Support+for+HA+Clusters Getting an error:

{
  "status" : 400,
  "message" : "Cluster Topology validation failed. Invalid service component count: [SECONDARY_NAMENODE(actual=0, required=1)]. To disable topology validation and create the blueprint, add the following to the end of the url: '?validate_topology=false'"
}

Tried to disable topology validation with validate_topology=false; the blueprint registered, but cluster creation failed with the error given below:

java.util.concurrent.ExecutionException: java.lang.Exception: java.lang.IllegalArgumentException: Unable to update configuration property 'dfs.namenode.https-address' with topology information. Component 'NAMENODE' is mapped to an invalid number of hosts '2'.

Any pointers to sort this out will be very helpful. Thanks, Mohan
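For reference, the configuration shape the wiki expects before two NAMENODEs will validate looks roughly like the following (a sketch distilled from the page linked above; "mycluster", nn1/nn2, and the host group names are placeholders, so cross-check against the wiki):

```
# Hedged sketch: hdfs-site/core-site entries an HA blueprint needs
# ("mycluster", nn1/nn2, and host group names are placeholders)
dfs.nameservices=mycluster
dfs.ha.namenodes.mycluster=nn1,nn2
dfs.namenode.rpc-address.mycluster.nn1=%HOSTGROUP::master_1%:8020
dfs.namenode.rpc-address.mycluster.nn2=%HOSTGROUP::master_2%:8020
dfs.namenode.http-address.mycluster.nn1=%HOSTGROUP::master_1%:50070
dfs.namenode.http-address.mycluster.nn2=%HOSTGROUP::master_2%:50070
dfs.client.failover.proxy.provider.mycluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
dfs.ha.fencing.methods=shell(/bin/true)
dfs.ha.automatic-failover.enabled=true
# core-site
fs.defaultFS=hdfs://mycluster
ha.zookeeper.quorum=%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_3%:2181
```

Per the wiki, each host group that carries a NAMENODE also needs a ZKFC component, and three hosts need JOURNALNODE.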
06-07-2016
05:18 PM
1 Kudo
Does Ranger support Cassandra ACLs the way it supports Hive?
05-03-2016
06:52 AM
@jaimin - The fourth point gave me an idea for isolating the problem. Checked the logs, and in hdfs-audit.log we found the issue. Surprisingly, it was due to the way we had named the worker nodes: we had used an underscore ("_") in the worker node names, as in worker_1_xyz, worker_2_xyz, etc., and this "_" caused the issue. We changed the naming, created a new cluster, and the services are running without any issues. In fact, two other Hive-related services were also failing due to the same issue. Thanks for your helpful pointers.
05-03-2016
12:05 AM
@Ryan Cicak - Thanks for the reply. No, I did not try that; will do the same. I remember seeing something about Hive also not getting started, but started checking the History Server part first, as a one-by-one journey. Will post the details after checking.