Labels: Cloudera Manager, HDFS
Brand new to Cloudera, I have used Cloudera Manager to install on a single node for trial purposes. After the install there is a health warning on HDFS:
Under-Replicated Blocks:
283 under replicated blocks in the cluster. 283 total blocks in the cluster. Percentage under replicated blocks: 100.00%. Critical threshold: 40.00%.
Not sure what could cause this; I have Googled around and not really found anything.
Created 04-01-2015 03:51 PM
On single-node clusters, you should reduce the replication factor to 1 in HDFS configuration in Cloudera Manager. Note that this only affects newly created files. Your existing files will still try to replicate to 3 hosts unless you change them explicitly (you should be able to google how to change a file's replication factor).
Thanks,
Darren
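A minimal sketch of the two steps described above, assuming a shell on the single node; the root path / is used for the whole filesystem, and the Cloudera Manager "Replication Factor" setting referred to here is typically the dfs.replication property, so adjust to your setup:
# Check how many blocks are currently under-replicated
$ hdfs fsck / | grep -i 'under-replicated'
# Lowering the Replication Factor (dfs.replication) in Cloudera Manager only affects files created afterwards;
# existing files keep their old factor until changed explicitly:
$ hadoop fs -setrep -R -w 1 /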
Created 07-18-2018 03:14 AM
I had a 3-node cluster in which Node#1 ran both a DataNode (DN) and the NameNode (NN), while Node#2 and Node#3 ran only a DN.
Now Node#2 and Node#3 are down, and Node#1 has free RAM and free disk space available.
Ideally Hadoop should re-replicate the blocks from the dead nodes (Node#2 and Node#3) onto Node#1 itself to maintain a replication factor of 3, but that is not happening. I waited for hours without seeing any progress, and Cloudera Manager keeps showing the same health issues.
Note: The same behaviour is also observed when only Node#3 is down and Node#1 and Node#2 are up and running.
Can someone please explain why this is happening?
Created 07-18-2018 03:43 AM
Hi @Manindar,
Replication only happens across different DataNodes; Hadoop does not place additional replicas on the same host.
If you only have one node up, you will only have one copy of the data.
Can you post your NameNode log? Thanks.
Regards,
Manu.
Created 07-18-2018 02:21 PM
Indeed, if you have a replication factor of 3 and only one DataNode is alive, then there is nowhere to replicate to. Three nodes with a replication factor of 3 means the blocks are already on every node, including the surviving one, so there is nothing to replicate or move.
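If you want to confirm this, fsck can show where each block's replicas actually live; a quick check (the path below is just a placeholder, point it at one of your own files):
$ hdfs fsck /path/to/some/file -files -blocks -locations
With only one live DataNode, every block will report a single location, which is why the under-replication count never goes down.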
Created 11-02-2015 02:33 AM
Please use the command below to change the replication factor for existing data on HDFS:
$ hadoop fs -setrep -R -w 2 /
Note: Use whatever number matches the replication factor you require (e.g. 1 on a single-node cluster).
$ hdfs fsck / -delete
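A quick way to verify the change afterwards (assuming a standard HDFS client; the listed path is a placeholder): the fsck summary should report 0 under-replicated blocks, and the second column of the ls output shows each file's replication factor.
$ hdfs fsck / | grep -i 'under-replicated'
$ hadoop fs -ls /some/path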
Created 07-08-2016 11:26 PM
After running hadoop fs -setrep -R -w 2 /, the error is fixed.
Can you please let us know the purpose of the hdfs fsck / -delete command? Does it delete under-replicated or corrupted blocks?
Created 10-25-2017 05:18 AM
Hi!
Yes - from the documentation (https://hadoop.apache.org/docs/r1.2.1/commands_manual.html#fsck):
fsck
Runs a HDFS filesystem checking utility.

| COMMAND_OPTION | Description |
| --- | --- |
| -delete | Delete corrupted files. |
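To expand on that a little: -delete removes files whose blocks are corrupt or missing; it does not touch files that are merely under-replicated. If you want to review what would be affected before deleting anything, you can list the corrupt files first:
$ hdfs fsck / -list-corruptfileblocks
$ hdfs fsck / -delete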
