Member since: 01-27-2016
Posts: 19
Kudos Received: 13
Solutions: 0
02-07-2016
04:29 PM
1 Kudo
Well, that was my "next" question... "what next?" I mean, I installed Hadoop, but what should come next: Ambari, ZooKeeper, HBase, HCat... (assuming I just want to try things out, so no need for a Ferrari to learn to drive). Since I guess there are as many perspectives as there are people, feel free to fire away, guys!
02-07-2016
04:04 PM
1 Kudo
I did it. Hadoop 2.7.2 is installed and configured, and it runs on my cluster! Thanks, guys, for your great recommendations.
02-01-2016
07:11 PM
1 Kudo
I learn from my mistakes ;-))))
02-01-2016
07:10 PM
1 Kudo
No reason except the one mentioned earlier. Probably some kind of masochism... I want to hit the wall at every step and climb over it. Once I have gained this experience, I will play with a commercial distribution...
02-01-2016
06:49 PM
1 Kudo
@Neeraj Sabharwal Hi, at this stage I want to go the hard way... no Ambari nor HW (although that should be my next venture). I want to learn bottom-up 😉 A few clarifications, please: I will launch new instances for my datanodes, but why is the 2NN working fine? What is the issue that makes the DN crash in flames? I see in your addendum that the datanodeUuid has something to do with it, but when is this ID created? I did not delete the metadata or data from the Hadoop directories. Should I try? If yes, I guess it is on the namenode, and then I should format it again. Right? Thanks!
02-01-2016
06:15 PM
1 Kudo
Hi. The environment details: I installed a Hadoop 2.7.2 (pure Hadoop, not HW) multi-node cluster on AWS (1 NameNode / 1 secondary NN / 3 datanodes, Ubuntu 14.04). The cluster was built following this tutorial (http://mfaizmzaki.com/2015/12/17/how-to-install-hadoop-2-7-1-multi-node-cluster-on-amazon-aws-ec2-instance-improved-part-1/), which means the first install (the master) is copied and tuned across the other machines.

The issue: each of the 3 data nodes works correctly on its own if I configure the cluster with a single datanode (I specifically excluded the other two). As soon as I add another data node, the data node that boots first logs a FATAL error (see the log extract and the snapshot of the VERSION file below) and stops. The data node that boots second then works fine... Any ideas or recommendations? Am I doing something wrong by cloning the master's AMI onto the other machines? Thanks, folks!

Log file:

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Unsuccessfully sent block report 0x1858458671b, containing 1 storage report(s), of which we sent 0. The reports had 0 total blocks and used 0 RPC(s). This took 5 msec to generate and 35 msecs for RPC and NN processing. Got back no commands.
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1251070591-172.Y.Y.Y-1454167071207 (Datanode Uuid 54bc8b80-b84f-4893-8b96-36568acc5d4b) service to master/172.Y.Y.Y:9000 is shutting down
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.UnregisteredNodeException): Data node DatanodeRegistration(172.X.X.X:50010, datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-8e09ff25-80fb-4834-878b-f23b3deb62d0;nsid=278157295;c=0) is attempting to report storage ID 54bc8b80-b84f-4893-8b96-36568acc5d4b. Node 172.Z.Z.Z:50010 is expected to serve this storage.
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-1251070591-172.31.34.94-1454167071207 (Datanode Uuid 54bc8b80-b84f-4893-8b96-36568acc5d4b) service to master/172.Y.Y.Y:9000
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-1251070591-172.Y.Y.Y-1454167071207 (Datanode Uuid 54bc8b80-b84f-4893-8b96-36568acc5d4b)
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Removing block pool BP-1251070591-172.31.34.94-1454167071207
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at HNDATA2/172.X.X.x
************************************************************/
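For what it's worth, the UnregisteredNodeException in this log is the signature of cloned storage: copying the master's image onto every node also copies the datanode's storage directory, so all clones report the same datanodeUuid, and the NameNode accepts only one node per storage ID. A minimal sketch of detecting that condition by parsing VERSION files (the helper names are mine, not part of Hadoop; the UUID is the one from the log above):

```python
# Sketch: detect the duplicate-UUID condition behind the
# UnregisteredNodeException. Helper names are hypothetical; the sample
# VERSION contents reuse the IDs quoted in the log above.

def parse_version(text):
    """Parse a Hadoop VERSION file (key=value lines) into a dict."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key] = value
    return props

def duplicate_uuids(version_texts):
    """Return any datanodeUuid that appears on more than one node."""
    seen = {}
    for text in version_texts:
        uuid = parse_version(text).get("datanodeUuid")
        seen[uuid] = seen.get(uuid, 0) + 1
    return [u for u, n in seen.items() if u and n > 1]

# Two "cloned" datanodes that ended up with the same UUID:
node_a = ("clusterID=CID-8e09ff25-80fb-4834-878b-f23b3deb62d0\n"
          "datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b\n"
          "storageType=DATA_NODE\n")
node_b = node_a  # the clone carries an identical VERSION file

print(duplicate_uuids([node_a, node_b]))
```

If cloning is indeed the cause, clearing the cloned datanode's data directory (whatever dfs.datanode.data.dir points to) before its first start should let each node generate a fresh datanodeUuid at registration time.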
Labels: Apache Hadoop
01-31-2016
06:16 PM
1 Kudo
Just to summarize (and close) this issue, here is how I fixed it. On the MASTER and the secondary NameNode, the NameNode VERSION file is under ~/.../namenode/current/VERSION. BUT for DATANODES the path is different: it should look something like ~/.../datanode/current/VERSION. The clusterIDs in the two VERSION files should be identical. Hope it helps!
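That clusterID comparison can be scripted. A minimal sketch, with sample VERSION contents modeled on the files quoted in this thread (the parsing helper is mine, not part of Hadoop, and the real files live under the namenode/datanode current/ directories mentioned above):

```python
# Sketch: verify that a datanode's clusterID matches the namenode's.
# VERSION files are plain key=value lists, so a tiny parser suffices.
# Sample contents are modeled on the VERSION files quoted in this thread.

def cluster_id(version_text):
    """Extract the clusterID value from VERSION file contents."""
    for line in version_text.splitlines():
        key, _, value = line.strip().partition("=")
        if key == "clusterID":
            return value
    return None

nn_version = ("namespaceID=278157295\n"
              "clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1\n"
              "cTime=0\n"
              "storageType=NAME_NODE\n")
dn_version = ("storageID=DS-f72f5710-a869-489d-9f52-40dadc659937\n"
              "clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1\n"
              "cTime=0\n"
              "storageType=DATA_NODE\n")

if cluster_id(nn_version) == cluster_id(dn_version):
    print("clusterIDs match")
else:
    print("clusterID mismatch: the datanode will refuse to start")
```

In a real cluster you would read the two files from the directories configured by dfs.namenode.name.dir and dfs.datanode.data.dir rather than from inline strings.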
01-31-2016
02:46 PM
1 Kudo
Hi Artem, I appreciate Hortonworks' perspective on the issue I shared. This is a clear product-management position and, from that perspective, a very valid point. Point taken!
01-31-2016
09:19 AM
I installed Hadoop 2.7.2 (1 master NN / 1 secondary NN / 3 datanodes) and tried to start my datanodes... Got stuck! After troubleshooting the logs (see below), the fatal error is due to a ClusterID mismatch... easy! Just change the IDs. WRONG... when I checked the VERSION files on my NameNode and my DataNodes, they are identical (see for yourself below). So the question is simple: in the log file, where is the ClusterID of the NameNode coming from???? Log file:
WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /home/hduser/mydata/hdfs/datanode: namenode clusterID = CID-8e09ff25-80fb-4834-878b-f23b3deb62d0; datanode clusterID = CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (DatanodeUuid unassigned) service to master/172.XX.XX.XX:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1358)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1323)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:745)
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (DatanodeUuid unassigned) service to master/172.XX.XX.XX:9000
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (DatanodeUuid unassigned)
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode

Copy of the VERSION files:

The master:
storageID=DS-f72f5710-a869-489d-9f52-40dadc659937
clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1
cTime=0
datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b
storageType=DATA_NODE
layoutVersion=-56

The DataNode:
storageID=DS-f72f5710-a869-489d-9f52-40dadc659937
clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1
cTime=0
datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b
storageType=DATA_NODE
layoutVersion=-56
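As a side note on "where is the NameNode's clusterID coming from": the "Incompatible clusterIDs" line itself quotes both IDs, and as far as I understand, the namenode side is the value the NameNode sends during the registration handshake (the one written to its own current/VERSION when the namenode was formatted), which is why it can differ from what a datanode directory shows. A small sketch pulling both IDs out of that exact message:

```python
import re

# The "Incompatible clusterIDs" line quotes both sides of the mismatch;
# pull the two CIDs out so they can be compared against the VERSION files.
# The message below is the one from the log in this post.
log_line = (
    "WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: "
    "Incompatible clusterIDs in /home/hduser/mydata/hdfs/datanode: "
    "namenode clusterID = CID-8e09ff25-80fb-4834-878b-f23b3deb62d0; "
    "datanode clusterID = CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1"
)

match = re.search(
    r"namenode clusterID = (CID-[0-9a-f-]+); "
    r"datanode clusterID = (CID-[0-9a-f-]+)",
    log_line,
)
nn_cid, dn_cid = match.groups()
print("namenode :", nn_cid)
print("datanode :", dn_cid)
print("mismatch :", nn_cid != dn_cid)
```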
Labels: Apache Hadoop
01-29-2016
06:52 AM
Thanks Aprit for your feedback and your recommendation.