Member since: 12-09-2015
Posts: 97
Kudos Received: 51
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1513 | 05-04-2016 06:00 AM
 | 3257 | 04-11-2016 09:57 AM
 | 1010 | 04-08-2016 11:30 AM
03-17-2016
01:32 PM
1 Kudo
Ok, I have fixed this issue. I had to change the owner of the folder /hadoop/hdfs/namenode/ to the hdfs user and hdfs group. I executed the following and everything is back to normal:

```
chown -R hdfs:hdfs /hadoop/hdfs/namenode
```
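For anyone hitting the same error, a minimal sketch of how to verify the fix, assuming the default HDP layout under /hadoop/hdfs/namenode (paths may differ on your cluster):

```bash
# Confirm that the NameNode metadata directory and its contents are now
# owned by the hdfs service user and group.
ls -ld /hadoop/hdfs/namenode /hadoop/hdfs/namenode/current
ls -l /hadoop/hdfs/namenode/current/VERSION

# If anything is still owned by root, repeat the recursive chown:
chown -R hdfs:hdfs /hadoop/hdfs/namenode
```

Then restart the NameNode (e.g. from Ambari) and the "Permission denied" error on the VERSION file should be gone.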
03-17-2016
12:07 PM
Today I observed that the NameNode was in red. I tried to restart the service, but I found the following errors. As per the suggestions mentioned in various threads, I formatted the NameNode using the command "hadoop namenode -format". I was logged in as 'root' when I did it. When I saw this error, I reformatted the NameNode as the 'hdfs' user, but I still see the following errors. Can anyone help me understand what is going wrong?

```
2016-03-17 17:27:02,305 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(683)) - Encountered exception loading fsimage
java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
```
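A sketch of the likely failure mode, under the assumption (not confirmed in the thread) that the metadata files were created while formatting as root:

```bash
# Formatting as root writes the new fsimage/VERSION files owned by root,
# so the NameNode process, which runs as the hdfs user, cannot read them.
# Running the format as the service user avoids the ownership problem:
sudo -u hdfs hadoop namenode -format
```

Note that formatting destroys existing HDFS metadata, so this is only appropriate on a cluster whose data you can afford to lose.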
Labels:
- Apache Ambari
03-09-2016
09:01 AM
2 Kudos
I am posting this so that it will be helpful for those users who are looking to understand how DFS capacity can be increased. I am providing the details in the steps below.

1) The "HDFS Disk usage" section (a box) on the dashboard shows the current DFS usage. However, the total DFS capacity is not shown here.
2) To view the total capacity, use the NameNode Web UI, e.g. http://172.26.180.6:50070/. This will show you the total DFS capacity.
3) It is helpful to see the file system information by executing "df -h", which tells you the size of each file system. In my case the root file system had very little space allocated to it (50 GB) compared to the file system mounted on /home (750 GB).
4) The straightforward way to increase the DFS capacity is to mention an additional folder in the "DataNode directories" field under the HDFS -> Configs -> Settings tab, as a comma-separated value. This new folder should exist on a file system that has more disk capacity.
5) Ambari for some reason does not accept /home as the folder name for storing file blocks. By default it shows "/hadoop/hdfs/data", and you cannot delete it completely to replace it with a new folder path.
6) The best way is to create a new mount point and point it to a folder under /home. Therefore create a mount point, e.g. /hdfsdata, and point it to a folder under /home, e.g. /home/hdfsdata. Following are the steps to create the new mount point (see the consolidated sketch after this list):

- Create a folder in the root, e.g. /hdfsdata
- Create a folder under /home, e.g. /home/hdfsdata
- Give the 'hdfs' user ownership of this folder: chown -R hdfs:hadoop /home/hdfsdata
- Provide file/folder permissions to this folder: chmod -R 777 /home/hdfsdata
- Mount the new folder: mount --bind /home/hdfsdata/ /hdfsdata/

After the above steps, restart the HDFS service and you have your capacity increased.
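The steps above, consolidated into a runnable sketch. The fstab entry for making the bind mount survive reboots is my addition and not part of the original steps, so treat it as an assumption to verify for your distribution:

```bash
#!/bin/sh
# Create the mount point at the root and the backing folder under /home,
# which sits on the larger file system.
mkdir -p /hdfsdata /home/hdfsdata

# Hand the folder to the hdfs service user and open up permissions.
chown -R hdfs:hadoop /home/hdfsdata
chmod -R 777 /home/hdfsdata

# Bind-mount the /home folder onto the root-level path that Ambari accepts.
mount --bind /home/hdfsdata/ /hdfsdata/

# Optional (assumption, not in the original post): persist the bind mount
# across reboots with an fstab entry.
echo '/home/hdfsdata /hdfsdata none bind 0 0' >> /etc/fstab
```

Then add /hdfsdata to "DataNode directories" in Ambari (comma-separated, after /hadoop/hdfs/data) and restart HDFS.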
03-09-2016
08:49 AM
@Artem Ervits: Okay. I have finally got what I wanted and I have increased the DFS capacity. Thanks for your help. I learned a lot through this exercise :). I am accepting your answer and also providing steps that I followed in another answer post, so that it will be helpful to other users.
03-08-2016
12:55 PM
@Neeraj Sabharwal I deleted my previous comment as it didn't make any sense. What I currently don't understand is that the "DataNode directories" field shows /hadoop/hdfs/data and I am not able to change it. If I edit the field to remove this folder name, the "Save" button gets disabled. It is also not taking the /home folder as a valid folder. The /home mount has the most space, and I am not able to mention it in the "DataNode directories" field. Any ideas? Thanks.
03-08-2016
11:34 AM
@Artem Ervits: I am having many issues now. 1) Ambari doesn't allow me to remove the folder name "/hadoop/hdfs/data", so I cannot completely replace it with a new folder. 2) If I enter /hadoop/hdfs/data,/home then it shows me the error: Can't start with "home(s)". I am pretty sure something is wrong.
03-08-2016
11:04 AM
@Artem Ervits I found "DataNode directories" under the "DataNode" section of the "Settings" tab. The "DataNode directories" field has the folder name /hadoop/hdfs/data. However, when I do df -h, I do not see this folder in the mount information. Following is the output of df -h on the master server:

```
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg_item70288-lv_root   50G   41G  6.2G  87% /
tmpfs                             3.8G     0  3.8G   0% /dev/shm
/dev/sda1                         477M   67M  385M  15% /boot
/dev/mapper/vg_item70288-lv_home  172G   21G  143G  13% /home
```
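A note on reading that output: df lists mounted file systems, so a directory like /hadoop/hdfs/data will not appear unless it is its own mount; it simply lives on its parent mount. Running df on the path itself (a standard df feature, not something specific to this thread) shows which file system backs it:

```bash
# Shows the file system backing the DataNode directory; here it would be
# the 50 GB root volume, since /hadoop is not a separate mount.
df -h /hadoop/hdfs/data
```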
03-08-2016
10:54 AM
@vinay kumar Can you help me understand where you found the 'dfs.datanode.data.dir' property? In my Ambari installation, I did not find this property under the 'Advanced hdfs-site' configuration.
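For anyone else searching: dfs.datanode.data.dir is the underlying hdfs-site.xml property that Ambari's "DataNode directories" field writes to. A quick way to check its current value from the shell, assuming the usual HDP client config location of /etc/hadoop/conf (an assumption; adjust for your install):

```bash
# Print the property name and the line after it (its value) from the
# deployed client configuration.
grep -A 1 'dfs.datanode.data.dir' /etc/hadoop/conf/hdfs-site.xml
```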
03-07-2016
01:56 PM
@Artem Ervits. Thanks, but I still could not find this property under the "Advanced hdfs-site" section. I was reading the link provided by Neeraj Sabharwal in his answer below, which also talks about mentioning /hadoop as the folder in the property 'dfs.datanode.data.dir'. But, as I said, I could not find this property.
03-07-2016
01:41 PM
@Artem Ervits Can you please elaborate on what you mean by "right spending of version of ambari"? I checked the "Advanced hdfs-site" section, but I don't see any "dfs.datanode.data.dir" property.