
HDFS issue

Explorer

Hi,

 

When I run the fsck command, it reports 68 total blocks (avg. block size 286572 B). How can I have only 68 blocks?

 

[hdfs@cluster1 ~]$ hdfs fsck /
Connecting to namenode via http://cluster1.abc:50070
FSCK started by hdfs (auth:SIMPLE) from /192.168.101.241 for path / at Fri Sep 25 09:51:56 EDT 2015
....................................................................Status: HEALTHY
 Total size: 19486905 B
 Total dirs: 569
 Total files: 68
 Total symlinks: 0
 Total blocks (validated): 68 (avg. block size 286572 B)
 Minimally replicated blocks: 68 (100.0 %)
 Over-replicated blocks: 0 (0.0 %)
 Under-replicated blocks: 0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor: 3
 Average block replication: 1.9411764
 Corrupt blocks: 0
 Missing replicas: 0 (0.0 %)
 Number of data-nodes: 3
 Number of racks: 1
FSCK ended at Fri Sep 25 09:51:56 EDT 2015 in 41 milliseconds

The filesystem under path '/' is HEALTHY

 

This is what I get when I run the hdfs dfsadmin -report command:

 

[hdfs@cluster1 ~]$ hdfs dfsadmin -report
Configured Capacity: 5715220577895 (5.20 TB)
Present Capacity: 5439327449088 (4.95 TB)
DFS Remaining: 5439303270400 (4.95 TB)
DFS Used: 24178688 (23.06 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 504

 


 

Also, when I run a Hive job, it does not go beyond "Running job: job_1443147339086_0002". Could it be related?

 

Any suggestions?

 

Thank you!

1 ACCEPTED SOLUTION

Mentor
> How can I have only 68 blocks?

That depends on how much data your HDFS is carrying. Is the number much lower than you expect, and does it not match the file count in the output of 'hadoop fs -ls -R /'?

The space report says only about 23 MB is used by HDFS, so the number of blocks looks OK to me.
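A quick way to cross-check is to count the files HDFS actually holds and compare that against fsck's totals. A minimal sketch (run as the hdfs user; in the -ls -R output, lines beginning with '-' are files and lines beginning with 'd' are directories):

# Count regular files in HDFS and compare with fsck's "Total files"
hadoop fs -ls -R / | grep -c '^-'

# Pull just the file/block totals out of the fsck report
hdfs fsck / | grep -E 'Total files|Total blocks'

Since every file smaller than one HDFS block still occupies its own block, 68 small files totalling 19486905 B (about 286572 B each on average) show up as exactly 68 blocks.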

> Also, when I run hive job, it does not go beyond "Running job: job_1443147339086_0002". Could it be related?

This would be unrelated. To resolve that issue, consider raising the values under YARN -> Configuration -> Container Memory (NodeManager) and Container Virtual CPUs (NodeManager).
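In yarn-site.xml terms, those settings correspond to yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores. A job that sits at "Running job: ..." is often stuck in the ACCEPTED state waiting for a container. A quick sketch to confirm that (the application ID below is derived from the job ID in your post by swapping the "job_" prefix for "application_"):

# List applications that are waiting for or holding resources
yarn application -list -appStates ACCEPTED,RUNNING

# Check the state and diagnostics of the specific application
yarn application -status application_1443147339086_0002

If it reports ACCEPTED with diagnostics about unavailable resources, raising the NodeManager memory/vcores as above should unblock it.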

