
HDFS Corrupt Blocks -- NameNode stays in Safe Mode

Re: HDFS Corrupt Block -- NameNode stays in Safe Mode

Rising Star

I just 'forced' it to leave Safe Mode...
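For reference, the standard admin command for forcing the NameNode out of Safe Mode (presumably what was used here, though the poster did not say) is:

hdfs dfsadmin -safemode leave

Be aware that this only clears the Safe Mode flag; it does not repair any missing or corrupt blocks.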

 

Re: HDFS Corrupt Block -- NameNode stays in Safe Mode

Rising Star

I was able to get HDFS back to 'normal'!

The question now is what caused the corrupt blocks.

What logs should I look at to find the root cause?

Remember: I still have to go back and add new nodes to the existing cluster.
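As a sanity check that HDFS really is back to normal, an fsck of the root path reports the corrupt and missing block counts:

hdfs fsck /

A healthy cluster ends the report with "The filesystem under path '/' is HEALTHY". As for logs, the NameNode log on the NameNode host is the usual place to look for when blocks were first reported corrupt.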

 


Re: HDFS Corrupt Block -- NameNode stays in Safe Mode

Rising Star

It is very important to me to understand the root cause of the corrupt blocks.

Where should I look for the report or log entry that explains how the blocks were corrupted?

All I did was add new DataNodes to the existing cluster, and I need to do it again.

 

Re: HDFS Corrupt Block -- NameNode stays in Safe Mode

New Contributor

I am also facing the same issue. After adding a DataNode, HDFS is somehow showing no space available (perhaps the blocks are corrupted).

I am not able to come out of Safe Mode either.

How did you clean up the files to create some space? Please elaborate.
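One place to start on the "no space available" symptom (an assumption on my part, since the cause here isn't known) is the dfsadmin report, which shows configured capacity, DFS used, and DFS remaining per DataNode:

hdfs dfsadmin -report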

Re: HDFS Corrupt Block -- NameNode stays in Safe Mode

New Contributor

I had the same issue with corrupted blocks and resolved it by deleting them.
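Assuming the standard fsck workflow (the poster did not give the exact commands), the corrupt files can be listed and then deleted like this:

hdfs fsck / -list-corruptfileblocks
hdfs fsck / -delete

Note that -delete permanently removes the affected files; hdfs fsck / -move will move them to /lost+found instead, if you want to inspect them first.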

 

Thanks 


@phxby wrote:

I think you got hit with https://issues.apache.org/jira/browse/HDFS-7281

(missing block marked as corrupted block)

 

For the file that is missing, do:

hdfs dfs -ls /accumulo/tables/!0/table_info/

Check the replication factor, which is shown in the second column of the output above.

If the replication factor is > 3, then you should have a copy of the block somewhere.

 

Get the list of the missing blocks, then on your DataNode, run:

find /<path_to_the_data_directory> -type f | grep <missing block>

E.g.:

find /<path_to_data_directory> -type f | grep 'BP-2034730372-10.15.230.22-1428441473000'

 

See if the block is there or not.
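To make the second-column check above concrete, the -ls output looks like this (hypothetical file name and size):

-rw-r--r--   3 accumulo hdfs      1234 2015-04-07 21:17 /accumulo/tables/!0/table_info/A0000abc.rf

Here the '3' after the permission bits is the replication factor.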


