
The hostname and canonical name for this host are not consistent

Expert Contributor

Hi All,

 

When I added a new node to the Cloudera cluster, the node went into bad health and shows the error message "The hostname and canonical name for this host are not consistent when checked from a Java process". Can anyone help us fix this issue?

 

 

Thanks,

Sathishkumar M

2 ACCEPTED SOLUTIONS

If you're running Red Hat or CentOS, ensure the /etc/sysconfig/network file
has the HOSTNAME field set to the FQDN, not just the short host name.

HOSTNAME=server1.example.com (good)
HOSTNAME=server1 (bad)

Edit this file and reboot the host; just restarting network services might not
be enough. Then let us know if the error continues to be logged.
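For context, the health check is effectively comparing the hostname the Java process sees against the canonical name that reverse DNS returns for the host. A minimal sketch of that comparison (the host names below are illustrative, not from any real cluster):

```python
def names_consistent(hostname, canonical):
    # Compare case-insensitively and ignore a trailing dot,
    # the way DNS name comparisons usually do.
    return hostname.lower().rstrip(".") == canonical.lower().rstrip(".")

# A bare HOSTNAME fails the check; the full FQDN passes.
print(names_consistent("server1", "server1.example.com"))               # False
print(names_consistent("server1.example.com", "server1.example.com."))  # True
```

This is why HOSTNAME=server1 triggers the warning while HOSTNAME=server1.example.com does not.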

Regards,
Gautam Gopalakrishnan


That is hard to say. Please run this python one-liner on a host where you
don't get complaints and on the host you just fixed. Do they look similar?

# python -c "import socket; print socket.getfqdn(); print socket.gethostbyname(socket.getfqdn())"
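That one-liner uses Python 2 print statements, which is what ships with CentOS 6. On a host where only Python 3 is available, an equivalent check (same socket calls, print as a function) might look like this:

```python
import socket

def fqdn_and_address(name=""):
    """Return (fqdn, ip) for the given name, or for the local host
    when name is empty, mirroring the Python 2 one-liner."""
    fqdn = socket.getfqdn(name)
    return fqdn, socket.gethostbyname(fqdn)

print(fqdn_and_address())
```

Run it on a healthy host and on the host you just fixed; the outputs should have the same shape: the FQDN first, then the address that FQDN resolves to.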


Regards,
Gautam Gopalakrishnan


25 REPLIES

Explorer

I am having the same problem here.

I used to install Cloudera Manager on CentOS 7 with no issues.

But on CentOS 6 (since Impala is not supported on CentOS 7, I had to try CentOS 6), I am getting this error on all hosts.

My /etc/sysconfig/network file is like this:

 

$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=cixa.c.gib-data.internal

I rebooted all hosts after setting HOSTNAME in /etc/sysconfig/network file.

All other nodes share the same configuration.

Host inspection gives no warning.

 

In host status, this error message is shown:

"The hostname and canonical name for this host are not consistent when checked from a Java process"

 

$ python -c "import socket; print socket.getfqdn(); print socket.gethostbyname(socket.getfqdn())"
cixa.c.gib-data.internal
10.128.0.2

All hosts answer correctly to nslookup and host commands.

All hosts can ssh and ping to all other hosts using the FQDN.


rampo wrote:
But on CentOS 6 (since Impala is not supported on CentOS 7, I had to try CentOS 6), I am getting this error on all hosts.

 

 

This is not true; our latest versions of CDH do run on CentOS 7, which means Impala is supported on it as well.

e.g. 5.11.1

https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh_download_511.html

Regards,
Gautam Gopalakrishnan

Explorer

I have installed Cloudera Manager 5.11 on CentOS 7. Unfortunately, on the Parcels page it says:

Impala is not supported on RHEL 7.

That's the whole reason I fell back to CentOS 6.

 

Also, when I check the Impala parcels at http://archive.cloudera.com/impala/parcels/latest/ I see there is no EL7 parcel.

Here is the complete list of Impala parcels:

 

IMPALA-2.1.0-1.impala2.0.0.p0.1995-el5.parcel
IMPALA-2.1.0-1.impala2.0.0.p0.1995-el6.parcel
IMPALA-2.1.0-1.impala2.0.0.p0.1995-lucid.parcel
IMPALA-2.1.0-1.impala2.0.0.p0.1995-precise.parcel
IMPALA-2.1.0-1.impala2.0.0.p0.1995-sles11.parcel
IMPALA-2.1.0-1.impala2.0.0.p0.1995-squeeze.parcel

 

 


@rampo wrote:
Also when I check Impala Parcels on http://archive.cloudera.com/impala/parcels/latest/ I see there is no EL7 parcel.

I'm glad you mentioned the parcel repo. With the latest releases of CDH, Impala is included in the CDH parcel itself and doesn't need a separate repository. The current version of Impala in CDH 5.11.1 is 2.8.0, see this URL: https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh_package_tarball_51...

 

So this means that if you're able to create a CDH cluster on a bunch of CentOS 7 hosts, you should be able to add the Impala service as well. Please don't add the parcel quoted above, as it's no longer required.

Regards,
Gautam Gopalakrishnan

Explorer

Thank you @GautamG

I wasn't aware of that until you clearly stated it.

 

New Contributor

I had the same issue on one of the nodes, and it was related to the /etc/resolv.conf entry. I changed the nameserver details to match those of the other nodes, and that fixed it.
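For anyone else landing here: this symptom can also come from nodes disagreeing on which DNS server to query. A consistent /etc/resolv.conf across all nodes typically looks like the fragment below; the domain and address are illustrative only, not taken from this thread:

```
# /etc/resolv.conf -- should be identical on every node (illustrative values)
search example.internal
nameserver 10.0.0.2
```

If one node points at a different nameserver, its reverse lookups can return a different canonical name than the rest of the cluster sees, which is exactly what the Java-side check complains about.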