Created on 09-16-2016 07:44 AM - edited 09-16-2022 03:39 AM
Hi, I'm new to Cloudera. I managed to install CDH 5 on my Ubuntu machine. Now I'm trying to run the sample MapReduce program provided in Hue, but I'm facing some problems. When I access the NameNode UI, I see the following:
Safe mode is ON. The reported blocks 0 needs additional 804 blocks to reach the threshold 0.9990 of total blocks 804. The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
My DataNode is also completely down, and I'm getting the following error in its logs:
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode
Initialization failed for Block pool BP-188106977-192.168.1.83-1467389018936 (Datanode Uuid bb6abb9f-79f6-47ce-bb2d-af7ff290b32f) service to humworld-Inc/127.0.1.1:8022
Datanode denied communication with namenode because the host is not in the include-list: DatanodeRegistration(127.0.0.1, datanodeUuid=bb6abb9f-79f6-47ce-bb2d-af7ff290b32f, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=cluster117;nsid=988195820;c=0)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:943)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:5079)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1156)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:96)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:29184)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

Sep 16, 8:01:26.387 PM INFO org.apache.hadoop.hdfs.server.datanode.DataNode
Block pool BP-188106977-192.168.1.83-1467389018936 (Datanode Uuid bb6abb9f-79f6-47ce-bb2d-af7ff290b32f) service to humworld-Inc/127.0.1.1:8022 beginning handshake with NN
Please help me with this.
Thanks,
Karthi
Created 09-16-2016 08:14 AM
Hello,
It may be that the configuration needs to be refreshed (so that the list of datanode hosts is up-to-date).
Check the Cloudera Manager home page and see if you see a stale configuration icon. If so, you can click on that icon to have Cloudera Manager apply the configuration updates it knows it needs to make. For more information, see the Cloudera Manager documentation on stale configurations.
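If you happen to be managing the HDFS configuration by hand rather than through Cloudera Manager, a rough manual equivalent (just a sketch; the include file path is whatever your configuration points at) is to add the DataNode's hostname to the include file and then ask the NameNode to re-read it:

hdfs getconf -confKey dfs.hosts      # shows which include file (if any) is configured
hdfs dfsadmin -refreshNodes          # tells the NameNode to re-read the include/exclude lists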
If that does not work, then let us know.
Ben
Created 09-16-2016 10:09 AM
I think I'm having many problems: there are 12 severe health issues, and my agent is also not running. I'm getting the following error message:
This host has been out of contact with the Cloudera Manager Server for too long. This host is not in contact with the Host Monitor.
They have suggested three actions, which are:
- Change Host Process Health Test for all hosts
- Change Host Process Health Test for this host
- Upgrade the Cloudera Manager Agent software
Please suggest which action I should take.
Also, since I'm completely new to Apache Hadoop and Cloudera, kindly suggest where I should start: CDH or the Cloudera QuickStart VM? And what is the difference between CDH and the QuickStart VM?
Thanks,
Karthi
Created 09-16-2016 10:23 AM
Hello Karthi,
Thanks for bringing your questions to this community.
I'll start by saying that the health issues you see stem from an inability of the Cloudera Manager agent on your host (or hosts) to communicate with Cloudera Manager on port 7182, and to communicate with the Host Monitor (to upload metrics) on port 9995.
It could be that your agent is not started. On your host or hosts, try running (as root or with sudo):
service cloudera-scm-agent start
If you are lucky, that's all that needs to be done. If the agents are running and you still see this problem, it gets into more complex troubleshooting. In that case, I'd start by making sure that iptables is not restricting ports (turn it off on all hosts if it is on), then try telnet or nc to the Cloudera Manager host's port 7182; example commands are sketched below.
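For example, roughly (replace cm-host with your actual Cloudera Manager hostname; the port numbers are the ones mentioned above, and this assumes the Host Monitor runs on the same host):

sudo service cloudera-scm-agent status   # confirm the agent process is running
sudo iptables -L -n                       # list any active firewall rules
nc -zv cm-host 7182                       # test connectivity to Cloudera Manager
nc -zv cm-host 9995                       # test connectivity to the Host Monitor port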
All that said, you may want to walk through our quickstart tutorial here:
http://www.cloudera.com/developers/get-started-with-hadoop-tutorial.html
It might prove to be a good starting point since you are new to Cloudera and Hadoop.
We love that you are giving this a try so keep the questions coming.
NOTE: for troubleshooting agent issues, your best friend is the agent log; by default it is:
/var/log/cloudera-scm-agent/cloudera-scm-agent.log
Look for python stack traces and error messages that could help you and us diagnose the issue.
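For instance, something along these lines (just a sketch) will surface the most recent errors and Python tracebacks:

sudo tail -n 200 /var/log/cloudera-scm-agent/cloudera-scm-agent.log
sudo grep -iE "error|traceback" /var/log/cloudera-scm-agent/cloudera-scm-agent.log | tail -n 50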
By default, the agent will try to heartbeat to the server configured as "server_host" in the agent's configuration file ( /etc/cloudera-scm-agent/config.ini ). So make sure that host is your Cloudera Manager host and that it is resolvable from this host.
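A quick sketch of what to check (cm-host.example.com is just a placeholder for your Cloudera Manager hostname):

# relevant lines in /etc/cloudera-scm-agent/config.ini
[General]
server_host=cm-host.example.com
server_port=7182

# verify the name resolves from this host
getent hosts cm-host.example.com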
That's a lot, so I'll stop there. Let us know if you get stuck.
Cheers,
Ben
Created 09-16-2016 10:58 AM
Hi Ben,
Thanks for your support. I will try these things.
Karthi
Created 06-13-2018 10:37 PM
Were you able to fix the issue? I am facing the same problem. If you fixed it, can you please share the resolution?