Member since: 03-29-2016
Posts: 38
Kudos Received: 7
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3015 | 06-25-2016 09:25 AM
06-25-2016 09:35 AM
Please note that the namespaceid referred to here is not the one you find in the file /hadoop/hdfs/namenode/current/VERSION. It is the value of the following property: dfs.nameservices.
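To illustrate (a minimal sketch; the nameservice name mycluster and the namenode IDs nn1/nn2 are assumptions, substitute your cluster's own values), the relevant properties in hdfs-site.xml look like this:

<!-- Logical name of the HA nameservice (this is the "namespaceid" to use) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<!-- Logical IDs of the two namenodes behind that nameservice -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>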
06-25-2016 09:30 AM
Thanks, Kuldeep, for your inputs. Finally found the reason: the value should be the nameservice that we chose for the cluster. The reason is that the cluster I was trying is an HA cluster, so if we put a specific host name, we will be in trouble whenever that host is down. By using the nameservice, things are better.
06-25-2016 09:25 AM
Got it! fs.defaultFS - this is in core-site.xml. The value should be set to hdfs://namespaceid (where namespaceid is the nameservice that has been defined for the cluster). It works!
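A minimal sketch of that setting, assuming the nameservice is named mycluster (substitute your own), in core-site.xml:

<!-- Point clients at the HA nameservice, not at any single namenode host -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>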
06-24-2016 06:30 PM
@Kuldeep - Tried some Hadoop operations like ls and put; every command fails because each request connects to localhost:8020 rather than to either the active or the standby namenode. Checked the configs involving 8020; see the attached file 8020.jpg.
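For anyone reproducing this, the URI the client actually resolves can be checked with stock HDFS commands (a sketch; the paths are just illustrative):

# Print the filesystem URI the client-side configs resolve to;
# if it shows localhost:8020, the HA core-site.xml is not being picked up
hdfs getconf -confKey fs.defaultFS
# Then retry a simple operation
hadoop fs -ls /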
06-24-2016 06:19 PM
@Kuldeep - Yes, the /etc/hosts file on all the nodes (including the datanodes) has the right entries for the namenode and the other nodes in the cluster. True, it is really not clear why the datanode is trying to connect to 8020 on localhost; it should have contacted the namenode. This is a freshly created cluster, and no operations have been run on it yet.
06-24-2016 05:42 PM
1 Kudo
HDP-2.3.4.7-4, Ambari version 2.2.1.1. All services are up and running except for the History Server. Could not find any related errors in the namenode or datanode logs. Following is the error reported by Ambari:

File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 191, in run_command
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT -T /usr/hdp/2.3.4.7-4/hadoop/mapreduce.tar.gz 'http://standbynamenode.sample.com:50070/webhdfs/v1/hdp/apps/2.3.4.7-4/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403.
{
  "RemoteException": {
    "exception": "ConnectException",
    "javaClassName": "java.net.ConnectException",
    "message": "Call From datanode.sample.com/10.250.98.101 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused"
  }
}

Status code 403 indicates that the request is well-formed but probably not authorized? Any pointers will be helpful. Thanks.
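As a diagnostic note: despite the 403, the inner RemoteException is a connection failure from the datanode to localhost:8020, which points back at fs.defaultFS rather than at permissions. A quick way to check (a sketch using stock HDFS commands; nn1 and nn2 stand for whatever logical namenode IDs dfs.ha.namenodes.<nameservice> defines in your cluster):

# Which URI do the client configs on this datanode resolve to?
hdfs getconf -confKey fs.defaultFS
# Which of the two namenodes is currently active?
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2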
Labels:
- Apache Hadoop
06-10-2016 11:57 AM
Hello Alexandru, It worked. Thank you. Now I am checking for similar properties for the Resource Manager, Hive, and Oozie, to make them highly available from the blueprint itself rather than creating the cluster with Ambari and then manually enabling HA for those services. Thanks, Mohan
06-10-2016 10:38 AM
Hello Alexandru, Great. I think that should be the reason - 100% :-). Will verify it and get back to mark this as the answer. Your quick pointer to the issue has been really helpful. Thank you so much. Best regards, Mohan
06-10-2016 10:20 AM
@Alexandru Anghel: Thanks for the quick reply. Saw the link that you posted; it is similar to what we had tried. Yes, the ZKFC component is installed on the same host as the NAMENODE component. Attaching two files: the blueprint (hdfs-ha-blueprint.txt) and the cluster template (clustertemplate.txt).
06-10-2016 08:33 AM
Wanted to set up an HA cluster (an active namenode and a standby namenode). We did not want a secondary namenode to be present - just two namenodes, one active and the other standby. Used an Ambari blueprint exactly as outlined in this link: https://cwiki.apache.org/confluence/display/AMBARI/Blueprint+Support+for+HA+Clusters Getting an error:

{
  "status" : 400,
  "message" : "Cluster Topology validation failed. Invalid service component count: [SECONDARY_NAMENODE(actual=0, required=1)]. To disable topology validation and create the blueprint, add the following to the end of the url: '?validate_topology=false'"
}

Tried to disable topology validation with validate_topology=false. The blueprint registered, but cluster creation then failed with the error below:

java.util.concurrent.ExecutionException: java.lang.Exception: java.lang.IllegalArgumentException: Unable to update configuration property 'dfs.namenode.https-address' with topology information. Component 'NAMENODE' is mapped to an invalid number of hosts '2'.

Any pointers to sort this out will be very helpful. Thanks, Mohan
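For context, the shape of the host_groups section we are aiming for is roughly the following (a sketch based on the linked wiki page, not our exact blueprint; the group names are assumptions, and the matching HA properties such as dfs.nameservices must also appear in the blueprint's configurations block):

"host_groups" : [
  {
    "name" : "master_1",
    "components" : [
      { "name" : "NAMENODE" }, { "name" : "ZKFC" },
      { "name" : "JOURNALNODE" }, { "name" : "ZOOKEEPER_SERVER" }
    ],
    "cardinality" : "1"
  },
  {
    "name" : "master_2",
    "components" : [
      { "name" : "NAMENODE" }, { "name" : "ZKFC" },
      { "name" : "JOURNALNODE" }, { "name" : "ZOOKEEPER_SERVER" }
    ],
    "cardinality" : "1"
  }
]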
Labels:
- Apache Ambari
- Apache Hadoop