<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question HDFS issue in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HDFS-issue/m-p/32303#M7553</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When I run the fsck command, it shows total blocks to&amp;nbsp;be&amp;nbsp;&lt;SPAN&gt;68 (avg. block size 286572 B). How can I have only 68 blocks?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[hdfs@cluster1 ~]$ hdfs fsck /&lt;/P&gt;&lt;P&gt;Connecting to namenode via &lt;A href="http://cluster1.abc:50070" target="_blank"&gt;http://cluster1.abc:50070&lt;/A&gt;&lt;/P&gt;&lt;P&gt;FSCK started by hdfs (auth:SIMPLE) from /192.168.101.241 for path / at Fri Sep 25 09:51:56 EDT 2015&lt;/P&gt;&lt;P&gt;....................................................................Status: HEALTHY&lt;/P&gt;&lt;P&gt;&amp;nbsp;Total size: 19486905 B&lt;/P&gt;&lt;P&gt;&amp;nbsp;Total dirs: 569&lt;/P&gt;&lt;P&gt;&amp;nbsp;Total files: 68&lt;/P&gt;&lt;P&gt;&amp;nbsp;Total symlinks: 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;Total blocks (validated): 68 (avg. block size 286572 B)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Minimally replicated blocks: 68 (100.0 %)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Over-replicated blocks: 0 (0.0 %)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Under-replicated blocks: 0 (0.0 %)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Mis-replicated blocks: 0 (0.0 %)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Default replication factor: 3&lt;/P&gt;&lt;P&gt;&amp;nbsp;Average block replication: 1.9411764&lt;/P&gt;&lt;P&gt;&amp;nbsp;Corrupt blocks: 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;Missing replicas: 0 (0.0 %)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Number of data-nodes: 3&lt;/P&gt;&lt;P&gt;&amp;nbsp;Number of racks: 1&lt;/P&gt;&lt;P&gt;FSCK ended at Fri Sep 25 09:51:56 EDT 2015 in 41 milliseconds&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The filesystem under path '/' is HEALTHY&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;-&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is what I get when I run the hdfs dfsadmin -report command:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[hdfs@cluster1 ~]$ hdfs dfsadmin -report&lt;/P&gt;&lt;P&gt;Configured Capacity: 5715220577895 (5.20 TB)&lt;/P&gt;&lt;P&gt;Present Capacity: 5439327449088 (4.95 TB)&lt;/P&gt;&lt;P&gt;DFS Remaining: 5439303270400 (4.95 TB)&lt;/P&gt;&lt;P&gt;DFS Used: 24178688 (23.06 MB)&lt;/P&gt;&lt;P&gt;DFS Used%: 0.00%&lt;/P&gt;&lt;P&gt;Under replicated blocks: 0&lt;/P&gt;&lt;P&gt;Blocks with corrupt replicas: 0&lt;/P&gt;&lt;P&gt;Missing blocks: 0&lt;/P&gt;&lt;P&gt;Missing blocks (with replication factor 1): 504&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;-&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, when I run a Hive job, it does not go beyond "Running job: job_1443147339086_0002". Could it be related?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any suggestions?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
    <pubDate>Fri, 16 Sep 2022 09:41:51 GMT</pubDate>
    <dc:creator>rio</dc:creator>
    <dc:date>2022-09-16T09:41:51Z</dc:date>
    <item>
      <title>HDFS issue</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HDFS-issue/m-p/32303#M7553</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When I run the fsck command, it shows total blocks to&amp;nbsp;be&amp;nbsp;&lt;SPAN&gt;68 (avg. block size 286572 B). How can I have only 68 blocks?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[hdfs@cluster1 ~]$ hdfs fsck /&lt;/P&gt;&lt;P&gt;Connecting to namenode via &lt;A href="http://cluster1.abc:50070" target="_blank"&gt;http://cluster1.abc:50070&lt;/A&gt;&lt;/P&gt;&lt;P&gt;FSCK started by hdfs (auth:SIMPLE) from /192.168.101.241 for path / at Fri Sep 25 09:51:56 EDT 2015&lt;/P&gt;&lt;P&gt;....................................................................Status: HEALTHY&lt;/P&gt;&lt;P&gt;&amp;nbsp;Total size: 19486905 B&lt;/P&gt;&lt;P&gt;&amp;nbsp;Total dirs: 569&lt;/P&gt;&lt;P&gt;&amp;nbsp;Total files: 68&lt;/P&gt;&lt;P&gt;&amp;nbsp;Total symlinks: 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;Total blocks (validated): 68 (avg. block size 286572 B)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Minimally replicated blocks: 68 (100.0 %)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Over-replicated blocks: 0 (0.0 %)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Under-replicated blocks: 0 (0.0 %)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Mis-replicated blocks: 0 (0.0 %)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Default replication factor: 3&lt;/P&gt;&lt;P&gt;&amp;nbsp;Average block replication: 1.9411764&lt;/P&gt;&lt;P&gt;&amp;nbsp;Corrupt blocks: 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;Missing replicas: 0 (0.0 %)&lt;/P&gt;&lt;P&gt;&amp;nbsp;Number of data-nodes: 3&lt;/P&gt;&lt;P&gt;&amp;nbsp;Number of racks: 1&lt;/P&gt;&lt;P&gt;FSCK ended at Fri Sep 25 09:51:56 EDT 2015 in 41 milliseconds&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The filesystem under path '/' is HEALTHY&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;-&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is what I get when I run the hdfs dfsadmin -report command:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[hdfs@cluster1 ~]$ hdfs dfsadmin -report&lt;/P&gt;&lt;P&gt;Configured Capacity: 5715220577895 (5.20 TB)&lt;/P&gt;&lt;P&gt;Present Capacity: 5439327449088 (4.95 TB)&lt;/P&gt;&lt;P&gt;DFS Remaining: 5439303270400 (4.95 TB)&lt;/P&gt;&lt;P&gt;DFS Used: 24178688 (23.06 MB)&lt;/P&gt;&lt;P&gt;DFS Used%: 0.00%&lt;/P&gt;&lt;P&gt;Under replicated blocks: 0&lt;/P&gt;&lt;P&gt;Blocks with corrupt replicas: 0&lt;/P&gt;&lt;P&gt;Missing blocks: 0&lt;/P&gt;&lt;P&gt;Missing blocks (with replication factor 1): 504&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;-&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, when I run a Hive job, it does not go beyond "Running job: job_1443147339086_0002". Could it be related?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any suggestions?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:41:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HDFS-issue/m-p/32303#M7553</guid>
      <dc:creator>rio</dc:creator>
      <dc:date>2022-09-16T09:41:51Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS issue</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HDFS-issue/m-p/32350#M7554</link>
      <description>&amp;gt; How can I have only 68 blocks?&lt;BR /&gt;&lt;BR /&gt;That depends on how much data your HDFS is carrying. Is the number much lower than expected, and does it not match the output of 'hadoop fs -ls -R /', which lists all files?&lt;BR /&gt;&lt;BR /&gt;The space report says only about 23 MB is used by HDFS, so the number of blocks looks OK to me.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Also, when I run hive job, it does not go beyond "Running job: job_1443147339086_0002". Could it be related?&lt;BR /&gt;&lt;BR /&gt;This would be unrelated, but to resolve that issue, consider raising the values under YARN -&amp;gt; Configuration -&amp;gt; Container Memory (NodeManager) and Container Virtual CPUs (NodeManager).</description>
      <pubDate>Sun, 27 Sep 2015 14:13:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HDFS-issue/m-p/32350#M7554</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2015-09-27T14:13:07Z</dc:date>
    </item>
  </channel>
</rss>

