rio
Explorer
Posts: 48
Registered: ‎04-18-2014

Issue while installing Hadoop

Hello,

 

I am trying to install Hadoop on Ubuntu 12.04 using Cloudera Manager. In "Specify hosts for your CDH cluster installation" it asks for my IP address and/or hostname. I entered 10.0.2.15, 127.0.0.1, and 127.0.1.1 (one at a time), but all three attempts failed.

 

I got the IPv4 address (10.0.2.15) using the "nm-tool" command.

 

My local address is: 127.0.0.1 

 

What am I doing wrong? My Ubuntu is running in Oracle VM VirtualBox. I am trying to install a 1-node cluster.

 

I downloaded the installation file from: http://go.cloudera.com/cloudera-express-download.html

 

Your help will be appreciated.

Thank you!

Cloudera Employee
Posts: 8
Registered: ‎04-22-2014

Re: Issue while installing Hadoop

What error do you see when you enter your IP address in the wizard in your browser? Also, make sure iptables is turned off.
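A quick way to confirm the firewall is not blocking the Cloudera Manager ports is a plain TCP connect test. Below is a minimal Python sketch of that check; the address in the comment is a placeholder for your own setup:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address; 7180 is the CM web UI port,
# 7182 the agent heartbeat port):
#   port_open("10.0.2.15", 7180)
#   port_open("10.0.2.15", 7182)
```

If the port shows as closed from the VM guest itself, the service is not listening; if it is open locally but closed from outside, suspect iptables or the VirtualBox network mode (NAT vs. bridged).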

 

You can also check the /var/log/cloudera-scm-server/cloudera-scm-server.log log file. You will see attempts to SSH into your VM guest when you click Search. Below is a snippet of what mine looks like when I hit Search:

 

-Hazem

 

 

2014-04-25 12:17:09,163 INFO [454338303@scm-web-5:node.NodeScannerService@287] Request 2 contains 1 nodes
2014-04-25 12:17:09,163 INFO [454338303@scm-web-5:node.NodeScannerService@383] Existing scan of node 10.7.0.175 is too old, rescanning
2014-04-25 12:17:09,163 INFO [454338303@scm-web-5:node.NodeScannerService@291] Finished submitting request 2
2014-04-25 12:17:09,168 INFO [NodeScannerThread-2:node.NodeScanner@219] Beginning scan of node 10.7.0.175 and port 22
2014-04-25 12:17:09,172 INFO [NodeScannerThread-2:node.NodeScanner@243] Canonical hostname is 10.7.0.175
2014-04-25 12:17:09,172 INFO [NodeScannerThread-2:node.NodeScanner@256] Connecting to remote host
2014-04-25 12:17:09,173 INFO [NodeScannerThread-2:node.NodeScanner@277] Disconnecting from remote host
2014-04-25 12:17:09,173 INFO [NodeScannerThread-2:node.NodeScanner@293] Connecting to ssh service on remote host
2014-04-25 12:17:09,174 INFO [454338303@scm-web-5:node.NodeScannerService@124] Request 2 returning 0/1 scans
2014-04-25 12:17:09,177 INFO [NodeScannerThread-2:transport.TransportImpl@152] Client identity string: SSH-2.0-SSHJ_0_8
2014-04-25 12:17:09,187 INFO [NodeScannerThread-2:transport.TransportImpl@161] Server identity string: SSH-2.0-OpenSSH_5.3
2014-04-25 12:17:09,189 INFO [NodeScannerThread-2:transport.KeyExchanger@195] Sending SSH_MSG_KEXINIT
2014-04-25 12:17:09,190 INFO [reader:transport.KeyExchanger@357] Received SSH_MSG_KEXINIT
2014-04-25 12:17:09,214 INFO [reader:kex.DHG14@110] Sending SSH_MSG_KEXDH_INIT
2014-04-25 12:17:09,217 INFO [reader:transport.KeyExchanger@370] Received kex followup data
2014-04-25 12:17:09,217 INFO [reader:kex.DHG14@120] Received SSH_MSG_KEXDH_REPLY
2014-04-25 12:17:09,241 INFO [reader:transport.KeyExchanger@203] Sending SSH_MSG_NEWKEYS
2014-04-25 12:17:09,241 INFO [reader:transport.KeyExchanger@385] Received SSH_MSG_NEWKEYS
2014-04-25 12:17:09,243 INFO [NodeScannerThread-2:node.CmfSSHClient@686] Key exchange took 0.053 seconds
2014-04-25 12:17:09,243 INFO [NodeScannerThread-2:node.NodeScanner@315] Disconnecting from ssh service on remote host
2014-04-25 12:17:09,243 INFO [NodeScannerThread-2:node.NodeScanner@208] Connected to SSH on node 10.7.0.175 with port 22 (latency PT0.001S)
2014-04-25 12:17:09,245 INFO [NodeScannerThread-2:node.NodeScannerService@175] Request 2 observed finished scan of node 10.7.0.175
2014-04-25 12:17:10,196 INFO [454338303@scm-web-5:node.NodeScannerService@124] Request 2 returning 1/1 scans

 

rio
Explorer
Posts: 48
Registered: ‎04-18-2014

Re: Issue while installing Hadoop

error message:

 

installation failed. Failed to receive heartbeat from agent.

Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager Server (check firewall rules).
Ensure that ports 9000 and 9001 are free on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added (some of the logs can be found in the installation details).

Cloudera Employee
Posts: 8
Registered: ‎04-22-2014

Re: Issue while installing Hadoop

So just to clarify:

1. You have a VM host on which you have VirtualBox installed.

2. You set up an Ubuntu VM guest and placed the CM installation file on that Ubuntu VM guest.

3. You were able to run the installer successfully (./cloudera-manager-installer.bin) from the command line.

4. You then opened a browser and went to which URL (e.g. http://hostname:7180)? Which hostname/IP did you point to?

 

Also, make sure your DNS resolution is working fine.
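As a DNS sanity check, the machine's FQDN should resolve to a real (non-loopback) address. A small Python sketch of that test (Python 3 syntax, unlike the Python 2 one-liners elsewhere in this thread):

```python
import ipaddress
import socket

def is_loopback(ip_str):
    """True for loopback addresses (127.0.0.0/8, ::1)."""
    return ipaddress.ip_address(ip_str).is_loopback

def check_fqdn():
    """Resolve this host's FQDN and flag loopback results, which
    usually indicate a misconfigured /etc/hosts."""
    fqdn = socket.getfqdn()
    ip = socket.gethostbyname(fqdn)
    if is_loopback(ip):
        print("WARNING: %s resolves to %s (loopback)" % (fqdn, ip))
    return fqdn, ip
```

If check_fqdn() prints the warning, the agent will report its address as 127.0.0.1 to the server, and the heartbeat will fail.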

 

-Hazem

rio
Explorer
Posts: 48
Registered: ‎04-18-2014

Re: Issue while installing Hadoop

Below are the answers:

1.

I tried on both VMware and VirtualBox, but no luck.



2.

I tried on both Ubuntu 12.04 and CentOS 6.5



3.

yes, I did:

chmod 755 cloudera-manager-installer.bin
sudo ./cloudera-manager-installer.bin



4.

I used "localhost" as the hostname.



I am very frustrated and tired by now. Your help will be appreciated.



Cloudera Employee
Posts: 79
Registered: ‎08-29-2013

Re: Issue while installing Hadoop


Good morning Rio,

 

Hostname configuration is absolutely critical to put in place before installation. Make sure you define a proper, unique hostname for each system (the exact method differs between Ubuntu and CentOS), instead of providing 'localhost' as the hostname.

 

For example, to do a CentOS single-node installation I'd have these two files look like this:

 

/etc/hosts

-+-+-+-+-+-+-

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.2.15 node-1.mycluster.internal node-1

 

 

/etc/sysconfig/network 

-+-+-+-+-+-+- 

NETWORKING=yes
HOSTNAME=node-1.mycluster.internal

 

 

In your case, what is returned when you run

 

$ python -c "import socket; print socket.getfqdn(); print socket.gethostbyname(socket.getfqdn())"

  

On a properly configured node, this command returns something like:

 

 [root@node-1 ~]# python -c "import socket; print socket.getfqdn(); print socket.gethostbyname(socket.getfqdn())" 

node-1.mycluster.internal

10.0.2.15

 

In summary, before running cloudera-manager-installer.bin, make sure you have sorted out all name resolution, regardless of how many nodes you have.
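The name-resolution requirement above can be sketched as a simple check against the /etc/hosts contents. A hypothetical Python helper (the sample text mirrors the example file in this post):

```python
def hosts_entry_for(hosts_text, name):
    """Return the IP that /etc/hosts-style text maps `name` to,
    or None if the name is absent. Comments are ignored."""
    for raw in hosts_text.splitlines():
        fields = raw.split("#", 1)[0].split()
        if len(fields) >= 2 and name in fields[1:]:
            return fields[0]
    return None

SAMPLE_HOSTS = """\
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.2.15 node-1.mycluster.internal node-1
"""

# The cluster hostname must map to the public IP, not loopback:
# hosts_entry_for(SAMPLE_HOSTS, "node-1.mycluster.internal")  -> "10.0.2.15"
```

If the lookup returns None or a 127.x address for your node's FQDN, fix /etc/hosts before running the installer.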

 

 

NB: With Ubuntu, do not use the 127.0.1.1 IP address.

 

 

rio
Explorer
Posts: 48
Registered: ‎04-18-2014

Re: Issue while installing Hadoop

How do I make sure my DNS resolution is working fine?

 

[root@localhost ~]# host 127.0.0.1
1.0.0.127.in-addr.arpa domain name pointer localhost.


[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@localhost ~]#


[root@localhost ~]# sestatus
SELinux status: disabled


[root@localhost ~]# users
root root root

 

[root@localhost ~]# service iptables status
iptables: Firewall is not running.

 

rio
Explorer
Posts: 48
Registered: ‎04-18-2014

Re: Issue while installing Hadoop

I gave up on Ubuntu and am giving CentOS 6.5 a shot.

 

[root@localhost ~]# python -c "import socket; print socket.getfqdn(); print socket.gethostbyname(socket.getfqdn())"
localhost.localdomain
127.0.0.1

 

[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@localhost ~]#

 

[root@localhost ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=localhost.localdomain

 

What changes do I need to make?

Cloudera Employee
Posts: 79
Registered: ‎08-29-2013

Re: Issue while installing Hadoop

My earlier post gives an example of the /etc/hosts and /etc/sysconfig/network configs. Find your public IP address (I think you said it was 10.0.2.15) and make your files look like the ones below. Substitute 'node-1.mycluster.internal' with whatever you want this node to be called, and tailor the shortname to match the FQDN. Mine is just an example:

 

[root@localhost ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.2.15 node-1.mycluster.internal node-1

 

[root@localhost ~]# cat /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=node-1.mycluster.internal

 

After making the changes, reboot the VM, then run

 

$ hostname -f

$ python -c "import socket; print socket.getfqdn(); print socket.gethostbyname(socket.getfqdn())"

 

and post the results of those two commands.
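One mismatch those commands will catch is a HOSTNAME in /etc/sysconfig/network that does not appear anywhere in /etc/hosts; in that case hostname -f typically reports "Unknown host". A rough sketch of that consistency check (pure string handling, file contents passed in as strings; the sample values are hypothetical):

```python
def network_hostname(network_text):
    """Extract the HOSTNAME= value from /etc/sysconfig/network text."""
    for line in network_text.splitlines():
        if line.startswith("HOSTNAME="):
            return line.split("=", 1)[1].strip()
    return None

def hostname_consistent(hosts_text, network_text):
    """True if the configured HOSTNAME appears as a name in hosts_text."""
    hn = network_hostname(network_text)
    if hn is None:
        return False
    return any(hn in line.split()[1:] for line in hosts_text.splitlines())

HOSTS = "10.0.2.15 node-1.mycluster.internal node-1\n"
NET_OK = "NETWORKING=yes\nHOSTNAME=node-1.mycluster.internal\n"
NET_BAD = "NETWORKING=yes\nHOSTNAME=node-1.mycluster.local\n"
# hostname_consistent(HOSTS, NET_OK)  -> True
# hostname_consistent(HOSTS, NET_BAD) -> False  (suffix mismatch)
```

Note the two suffixes in the sample differ only in .internal vs. .local; the names must match exactly for resolution to work.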

rio
Explorer
Posts: 48
Registered: ‎04-18-2014

Re: Issue while installing Hadoop

I am now on CentOS, so I think my public IP address is now 172.16.66.136?

 

I got it from the command below:

[root@node-1 ~]# ifconfig | perl -nle'/dr:(\S+)/ && print $1'
172.16.66.136
127.0.0.1


[root@node-1 ~]# nm-tool

NetworkManager Tool

State: connected

- Device: eth0 [System eth0] --------------------------------------------------
Type: Wired
Driver: e1000
State: connected
Default: yes
HW Address: 00:0C:29:DE:C4:F2

Capabilities:
Carrier Detect: yes
Speed: 1000 Mb/s

Wired Properties
Carrier: on

IPv4 Settings:
Address: 172.16.66.136
Prefix: 24 (255.255.255.0)
Gateway: 172.16.66.2

DNS: 172.16.66.2


==

Below is what I am getting now:

 

[root@node-1 ~]# hostname -f
hostname: Unknown host

[root@node-1 ~]# python -c "import socket; print socket.getfqdn(); print socket.gethostbyname(socket.getfqdn())"
node-1.mycluster.local
Traceback (most recent call last):
File "<string>", line 1, in <module>
socket.gaierror: [Errno -2] Name or service not known
[root@node-1 ~]#

 

==

 

My settings are:

 

[root@node-1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.66.136 node-1.mycluster.internal node-1


[root@node-1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node-1.mycluster.local

==