Member since
02-04-2019
3
Posts
0
Kudos Received
0
Solutions
02-05-2019
10:31 AM
@Geoffrey Shelton Okot Whatever you are suggesting already exists on the respective servers. You say to create /datadrv2 on .50 and .51, but as I mentioned earlier, it is already present. Please see this again:
on .50: /datadrv1, /datadrv2, /datadrv3
on .51: /datadrv1, /datadrv2
on .52: /data1
on .53: /datadrv1
on .54: /data
on .55: /data1
These directories are already there on the respective servers. When data comes into HDFS, it is automatically written to the paths listed in the HDFS config ({/datadrv1/hadoop/hdfs/data,/data1/hadoop/hdfs/data}). But if a path is missing (for example, /datadrv1 does not exist on .52), the datanode creates that /datadrv1 directory on the root filesystem and puts the data there. That is why root space is getting full: the data should be going to the mounted directories, but it is not. The same is happening on the other servers too. Do you see my problem now?
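A quick way to confirm this behaviour on each datanode is to check whether every path listed in dfs.datanode.data.dir is a real mount point. A minimal sketch, assuming standard Linux tooling (the paths are the ones from this cluster; adjust per server):

```shell
# For each path configured in dfs.datanode.data.dir, check whether it is
# actually a mount point. If it is not, any data HDFS writes there lands
# on the root filesystem instead of a dedicated disk.
for d in /datadrv1 /data1; do
  if mountpoint -q "$d" 2>/dev/null; then
    echo "$d is a mount point"
  else
    echo "$d is NOT a mount point (writes here go to the root disk)"
  fi
done
```

Running this on .52 should show that /datadrv1 is not a mount point there, which matches the symptom described above.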
02-05-2019
09:14 AM
@Geoffrey Shelton Okot You have suggested creating the mount points, but where should I create them, given that they already exist on the other datanodes? Are you asking me to create every mount point on every datanode in the cluster? One server has three mount points, the second has two, the third has only one, and so on. What is happening is this: whenever data comes into HDFS, if the datanode does not find the path given in the config file, it creates that directory on root. If I list all the mount points that exist across the different servers, won't that create duplicate data blocks? I mean, won't one block be written twice if I list every mount point in dfs.datanode.data.dir?
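For what it's worth, listing several directories in dfs.datanode.data.dir does not by itself duplicate blocks: a datanode spreads new blocks across its configured directories, and replication happens across datanodes, not across directories on one node. A sketch of how to check this on one datanode, assuming the data directories from this cluster's config:

```shell
# List block files under each configured data directory on this datanode
# and print any block id that appears more than once. Empty output means
# no block is stored twice on this node.
find /datadrv1/hadoop/hdfs/data /data1/hadoop/hdfs/data \
    -name 'blk_*' -not -name '*.meta' 2>/dev/null \
  | xargs -r -n1 basename | sort | uniq -d
```

On a node where one of the paths does not exist, find simply skips it; the duplicate check still works for the directories that are present.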
02-04-2019
06:49 PM
Hi All, I have 6 datanodes in my cluster, and the servers have different mount points:
on .50: /datadrv1, /datadrv2, /datadrv3
on .51: /datadrv1, /datadrv2
on .52: /data1
on .53: /datadrv1
on .54: /data
on .55: /data1
In the HDFS config, the datanode directories are specified as /datadrv1/hadoop/hdfs/data and /data1/hadoop/hdfs/data. Because of these differing mount points, data is going onto root: when Hadoop does not find the exact path on a server, it creates the missing directories under root. So my question is: does HDFS config grouping solve this problem? If yes, please provide me the steps. PS: Please don't just provide a document link. If you attach a document, then please also tell me the exact steps to follow. Thanks and Regards.
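If per-host overrides are available (for example, Ambari Config Groups), each group of hosts can carry its own dfs.datanode.data.dir value listing only the mounts that actually exist there. A hypothetical sketch of the hdfs-site.xml override for the .50 host group (the exact steps depend on your management tool):

```xml
<!-- Hypothetical per-group override: list only the mounts present on these hosts -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/datadrv1/hadoop/hdfs/data,/datadrv2/hadoop/hdfs/data,/datadrv3/hadoop/hdfs/data</value>
</property>
```

A node with only /data1 would get its own group whose value is just /data1/hadoop/hdfs/data, so no path in its config can fall back to the root filesystem.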
Labels:
Apache Hadoop