Member since: 01-07-2016
Posts: 21
Kudos Received: 11
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1140 | 01-26-2016 03:35 AM
01-27-2016 08:12 AM · 1 Kudo
Sorry guys, I finally managed to start the NameNode by changing the value to 0.0.0.0. It works perfectly.
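For anyone landing here later: the thread never names the exact property, only that "the value" was changed to 0.0.0.0. A common fix matching this symptom is binding the NameNode RPC service to all interfaces in hdfs-site.xml — a sketch, assuming the standard `dfs.namenode.rpc-bind-host` property was the one changed:

```xml
<!-- hdfs-site.xml: illustrative sketch; the property name is an assumption,
     since the thread only says "the value" was set to 0.0.0.0. -->
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <!-- Listen on all interfaces instead of one fixed IP, which helps on
       EC2 hosts whose addresses change across restarts. -->
  <value>0.0.0.0</value>
</property>
```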
01-27-2016 06:42 AM
Hi Neeraj, Can you help me check whether my hosts file is configured correctly? Just to let you know, I am using AWS EC2 as the server, and every time I restart the server it gives me a new IP address. Best Regards, David
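For context, the error logs later in this thread show ip-172-30-1-137.ap-southeast-1.compute.internal resolving to 52.77.231.10 (a public address). On EC2 a typical check is that /etc/hosts maps the internal hostname to the instance's private IP — an illustrative layout only, with the private address inferred from the hostname in this thread:

```
# /etc/hosts — hypothetical sketch; hostnames/IPs taken or inferred from this thread
127.0.0.1       localhost localhost.localdomain
172.30.1.137    ip-172-30-1-137.ap-southeast-1.compute.internal ip-172-30-1-137
```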
01-27-2016 06:35 AM
Hi Neeraj, I have tried, but it still could not start. Best Regards, David
01-27-2016 03:21 AM
It is still the same. This is the log file: hadoop-hdfs-namenode-ip-172-30-1-137ap-southeast-1.zip
01-27-2016 03:14 AM
Your command returns the error below; that is the reason I am using hdfs instead.

[root@ip-172-30-1-137 sbin]# hadoop dfsadmin -safemode leave
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
safemode: Call From ip-172-30-1-137.ap-southeast-1.compute.internal/52.77.231.10 to ip-172-30-1-137.ap-southeast-1.compute.internal:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
01-27-2016 03:12 AM
I am getting the following error message:

[root@ip-172-30-1-137 sbin]# hdfs dfsadmin -safemode leave
safemode: Call From ip-172-30-1-137.ap-southeast-1.compute.internal/52.77.231.10 to ip-172-30-1-137.ap-southeast-1.compute.internal:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
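A "Connection refused" on port 8020 usually means nothing is listening there at all — i.e. the NameNode process is not up — rather than safe mode being the problem. A small sketch to check reachability, with the hostname and port taken from the error above:

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, DNS failures
        return False


# Usage (hostname/port from the error above). If this returns False while
# the host itself is reachable, the NameNode service is most likely down:
# port_open("ip-172-30-1-137.ap-southeast-1.compute.internal", 8020)
```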
01-26-2016 06:00 AM
Hi Artem, I hope I am sending you the right file: hadoop-hdfs-namenode-ip-172-30-1-137ap-southeast-1.zip Best Regards, David Yee
01-26-2016 05:42 AM
Hi Artem, When I tried to start all the components, it failed on the "NameNode Start" step with the following error:

resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-ip-172-30-1-137.ap-southeast-1.compute.internal.out

Best Regards, David
01-26-2016 04:32 AM · 2 Kudos
Hi All, When I go to the cluster's services, most of the services have failed. When I try to restart each service, it takes a long time to start. Help would be appreciated. services-error-cluster.png Best Regards, David Yee
Labels:
- Apache Ambari