<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: under-replicated blocks + why we get this warning on new scratch installation? in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281406#M209358</link>
    <description>&lt;P&gt;Dear Shelton&lt;/P&gt;&lt;P&gt;these are the results that we get from&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;hdfs fsck / -storagepolicies&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;FSCK started by hdfs (auth:SIMPLE) from /192.9.200.217 for path / at Sun Oct 27 05:49:31 UTC 2019&lt;BR /&gt;..................&lt;BR /&gt;/hdp/apps/2.6.4.0-91/hive/hive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741831&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/hive/hive.tar.gz: MISSING 1 blocks of total size 106475099 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/mapreduce/hadoop-streaming.jar: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741834&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/mapreduce/hadoop-streaming.jar: MISSING 1 blocks of total size 105758 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741825&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741826&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: MISSING 2 blocks of total size 212360343 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741829&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741830&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: MISSING 2 blocks of total size 135018554 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/slider/slider.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741828&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/slider/slider.tar.gz: MISSING 1 blocks of total size 47696340 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741832&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741833&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: MISSING 2 blocks of total size 189992674 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/tez/tez.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741827&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/tez/tez.tar.gz: MISSING 1 blocks of total size 53236968 B......&lt;BR /&gt;/user/ambari-qa/.staging/job_1571958926657_0001/job.jar: Under replicated BP-2095386762-192.9.201.8-1571956239762:blk_1073741864_1131. Target Replicas is 10 but found 5 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).&lt;BR /&gt;.&lt;BR /&gt;/user/ambari-qa/.staging/job_1571958926657_0001/job.split: Under replicated BP-2095386762-192.9.201.8-1571956239762:blk_1073741865_1132. Target Replicas is 10 but found 5 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).&lt;BR /&gt;...Status: CORRUPT&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;yes, we checked the replication factor - it is 3&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;based on those results, can we just delete the corrupted blocks?&lt;/P&gt;</description>
    <pubDate>Sun, 27 Oct 2019 06:27:40 GMT</pubDate>
    <dc:creator>mike_bronson7</dc:creator>
    <dc:date>2019-10-27T06:27:40Z</dc:date>
    <item>
      <title>under-replicated blocks + why we get this warning on new scratch installation?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281401#M209353</link>
      <description>&lt;P&gt;We installed a new Ambari cluster with the following details (we moved to Red Hat 7.5 instead of 7.2):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Red Hat – 7.5&lt;BR /&gt;HDP version – 2.6.4&lt;BR /&gt;Ambari – 2.6.2&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After we completed the installation, we noticed very strange behavior (please note that this is a new cluster).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;On the HDFS status summary, I see the following message about under-replicated blocks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We see that the under-replicated blocks count is 12 (while it should be 0 on a new installation).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Any suggestion as to why this happens?&lt;/P&gt;
&lt;P&gt;I just want to say that this behavior does not appear on Red Hat 7.2.&lt;/P&gt;</description>
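      <!--
        A minimal command-line sketch for confirming the dashboard figure, assuming a standard
        HDFS client run as the hdfs user (output shape is illustrative):

          # per-DataNode capacity and block summary
          hdfs dfsadmin -report | head -n 25

          # filesystem health report, including the under-replicated block count
          hdfs fsck / | grep -iE 'under.?replicated|missing|corrupt'
      -->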
      <pubDate>Sat, 26 Oct 2019 20:20:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281401#M209353</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2019-10-26T20:20:50Z</dc:date>
    </item>
    <item>
      <title>Re: under-replicated blocks + why we get this warning on new scratch installation?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281403#M209355</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/59349"&gt;@mike_bronson7&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Under-replicated blocks&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;There are a couple of potential sources of the problem that triggers this alert! The HDP versions earlier than HDP 3.x all use the standard default replication factor of 3 for reasons you know well: the ability to rebuild the data in any case, as opposed to the new erasure coding policies in Hadoop 3.0.&lt;/P&gt;&lt;P&gt;Secondly, the cluster will rebalance itself if you give it time &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Having said that, the first question is how many DataNodes were set up in this new cluster, and did you &lt;A href="https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.0.0/data-storage/content/improving_performance_with_hdfs_rack_awareness.html" target="_blank" rel="noopener"&gt;enable rack awareness?&lt;/A&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;This usually means that some files are “asking” for a specific number of target replicas that are not present or cannot be satisfied. So the question is, how do I know which files are asking for a number of replicas that are not available?&lt;/P&gt;&lt;P&gt;The first option is to use hdfs fsck:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;$ hdfs fsck / -storagepolicies&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;****** **************output *********************&lt;BR /&gt;&lt;FONT color="#FF6600"&gt;Connecting to namenode via &lt;A href="http://xxx.com:50070/fsck?ugi=hdfs&amp;amp;storagepolicies=1&amp;amp;path=%2F" target="_blank"&gt;http://xxx.com:50070/fsck?ugi=hdfs&amp;amp;storagepolicies=1&amp;amp;path=%2F&lt;/A&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#FF6600"&gt;FSCK started by hdfs (auth:SIMPLE) from /192.168.0.94 for path / at Sat Oct 26 23:03:24 CEST 2019&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF6600"&gt;/user/zeppelin/notebook/2EC24FF9U/note.json: &lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#FF6600"&gt;Under replicated BP-2067995211-192.168.0.101-1537740712051:blk_1073751507_10767. &lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#FF6600"&gt;Target Replicas is &lt;STRONG&gt;&lt;FONT color="#000000"&gt;3&lt;/FONT&gt;&lt;/STRONG&gt; but found &lt;STRONG&gt;&lt;FONT color="#000000"&gt;1&lt;/FONT&gt;&lt;/STRONG&gt; live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;******&lt;/P&gt;&lt;P&gt;Change the replication:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;$ hdfs dfs -setrep -w 1 /user/zeppelin/notebook/2EC24FF9U/note.json&lt;/STRONG&gt;&lt;BR /&gt;&lt;FONT color="#FF6600"&gt;Replication 1 set: /user/zeppelin/notebook/2EC24FF9U/note.json&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#FF6600"&gt;Waiting for /user/zeppelin/notebook/2EC24FF9U/note.json ... done&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;You also need to check &lt;STRONG&gt;dfs.replication&lt;/STRONG&gt; in &lt;STRONG&gt;hdfs-site.xml&lt;/STRONG&gt;; the default is configured to be &lt;STRONG&gt;3&lt;/STRONG&gt;. Note that if you upload files through Ambari, the file actually gets a replication factor of 3.&lt;BR /&gt;&lt;BR /&gt;HTH&amp;nbsp;&lt;/P&gt;</description>
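      <!--
        A small sketch for checking the effective default replication from the client side,
        assuming the standard HDFS CLI (the file path below is only an example from this post):

          # default replication factor the client will apply to new files
          hdfs getconf -confKey dfs.replication

          # the second column of -ls shows the replication factor of an existing file
          hdfs dfs -ls /user/zeppelin/notebook/2EC24FF9U/note.json
      -->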
      <pubDate>Sat, 26 Oct 2019 22:22:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281403#M209355</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2019-10-26T22:22:12Z</dc:date>
    </item>
    <item>
      <title>Re: under-replicated blocks + why we get this warning on new scratch installation?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281406#M209358</link>
      <description>&lt;P&gt;Dear Shelton&lt;/P&gt;&lt;P&gt;these are the results that we get from&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;hdfs fsck / -storagepolicies&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;FSCK started by hdfs (auth:SIMPLE) from /192.9.200.217 for path / at Sun Oct 27 05:49:31 UTC 2019&lt;BR /&gt;..................&lt;BR /&gt;/hdp/apps/2.6.4.0-91/hive/hive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741831&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/hive/hive.tar.gz: MISSING 1 blocks of total size 106475099 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/mapreduce/hadoop-streaming.jar: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741834&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/mapreduce/hadoop-streaming.jar: MISSING 1 blocks of total size 105758 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741825&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741826&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: MISSING 2 blocks of total size 212360343 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741829&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741830&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: MISSING 2 blocks of total size 135018554 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/slider/slider.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741828&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/slider/slider.tar.gz: MISSING 1 blocks of total size 47696340 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741832&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741833&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: MISSING 2 blocks of total size 189992674 B..&lt;BR /&gt;/hdp/apps/2.6.4.0-91/tez/tez.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741827&lt;/P&gt;&lt;P&gt;/hdp/apps/2.6.4.0-91/tez/tez.tar.gz: MISSING 1 blocks of total size 53236968 B......&lt;BR /&gt;/user/ambari-qa/.staging/job_1571958926657_0001/job.jar: Under replicated BP-2095386762-192.9.201.8-1571956239762:blk_1073741864_1131. Target Replicas is 10 but found 5 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).&lt;BR /&gt;.&lt;BR /&gt;/user/ambari-qa/.staging/job_1571958926657_0001/job.split: Under replicated BP-2095386762-192.9.201.8-1571956239762:blk_1073741865_1132. Target Replicas is 10 but found 5 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).&lt;BR /&gt;...Status: CORRUPT&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;yes, we checked the replication factor - it is 3&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;based on those results, can we just delete the corrupted blocks?&lt;/P&gt;</description>
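      <!--
        Because the corrupt files here are the stock HDP service tarballs under /hdp/apps, one possible
        recovery (a sketch only; the local source path below is assumed from a typical HDP 2.6.4 layout
        and should be verified on the node before use) is to remove the damaged copy and re-upload it:

          # example for tez.tar.gz; repeat for the other corrupt tarballs
          hdfs dfs -rm -skipTrash /hdp/apps/2.6.4.0-91/tez/tez.tar.gz
          hdfs dfs -put /usr/hdp/2.6.4.0-91/tez/lib/tez.tar.gz /hdp/apps/2.6.4.0-91/tez/
      -->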
      <pubDate>Sun, 27 Oct 2019 06:27:40 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281406#M209358</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2019-10-27T06:27:40Z</dc:date>
    </item>
    <item>
      <title>Re: under-replicated blocks + why we get this warning on new scratch installation?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281411#M209360</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/59349"&gt;@mike_bronson7&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regarding under-replicated blocks, HDFS is supposed to recover them automatically (by creating missing copies to fulfill the replication factor), but in your case your cluster-wide replication factor is &lt;STRONG&gt;3&lt;/STRONG&gt; while the target for those files is &lt;STRONG&gt;10&lt;/STRONG&gt;. That suggests you have 5 DataNodes while 10 replicas are requested, leading to the under-replication alert!&lt;/P&gt;&lt;P&gt;According to the output you have 2 distinct problems:&lt;BR /&gt;(a) Under-replicated blocks, Target Replicas is 10 but found 5 live replica(s) [last 2 lines]&lt;BR /&gt;(b) Corrupt blocks, each with a different solution&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Solution 1: under-replicated blocks&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;You could force the 2 blocks to align with the cluster-wide replication factor by adjusting them with -setrep:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#FF6600"&gt;$ hdfs dfs -setrep -w 3 [File_name]&lt;/FONT&gt;&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;Validate:&lt;/P&gt;&lt;P&gt;Now you should see &lt;STRONG&gt;3&lt;/STRONG&gt; after the file permissions, before the user:group, like below:&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF6600"&gt;$ hdfs dfs -ls [File_name]&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#000000"&gt;&lt;FONT color="#FF6600"&gt;-rw-r--r--&lt;/FONT&gt; &lt;STRONG&gt;3&lt;/STRONG&gt;&amp;nbsp;&amp;nbsp;&lt;FONT color="#FF6600"&gt;analyst hdfs 1068028 2019-10-27 12:30 /flighdata/airports.dat&lt;/FONT&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Then wait for the re-replication to happen, or run the snippets below sequentially:&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF6600"&gt;$ hdfs fsck / | grep 'Under replicated'&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF6600"&gt;$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' &amp;gt;&amp;gt; /tmp/under_replicated_files&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF6600"&gt;$ for hdfsfile in `cat /tmp/under_replicated_files`; do echo "Fixing $hdfsfile :" ; hadoop fs -setrep 3 $hdfsfile; done&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;For corrupt files&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;$ hdfs fsck / | egrep -v '^\.+$' | grep -i corrupt&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;...............Example output............................&lt;BR /&gt;/user/analyst/test9: CORRUPT blockpool BP-762603225-192.168.1.2-1480061879099 block blk_1055741378&lt;BR /&gt;/user/analyst/data1: CORRUPT blockpool BP-762603225-192.168.1.2-1480061879099 block blk_1056741378&lt;BR /&gt;/user/analyst/data2: MISSING 3 blocks of total size 338192920 B.Status: CORRUPT&lt;BR /&gt;CORRUPT FILES: 9&lt;BR /&gt;CORRUPT BLOCKS: 18&lt;BR /&gt;Corrupt blocks: 18&lt;BR /&gt;The filesystem under path '/' is CORRUPT&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Locate the corrupted blocks&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF6600"&gt;$ hdfs fsck / | egrep -v '^\.+$' | grep -i "corrupt blockpool"| awk '{print $1}' |sort |uniq |sed -e 's/://g' &amp;gt;corrupted.flst&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Get the locations of each file listed in corrupted.flst above:&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF6600"&gt;$ hdfs fsck /user/analyst/xxxx -locations -blocks -files&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;Remove the corrupted files&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF6600"&gt;$ hdfs dfs -rm /path/to/corrupt_filename&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Skip the trash to delete permanently:&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF6600"&gt;$ hdfs dfs -rm -skipTrash /path/to/corrupt_filename&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You should give the cluster some time to rebalance in the case of under-replicated files.&amp;nbsp;&lt;/P&gt;</description>
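      <!--
        A short verification sketch after applying the fixes above, assuming standard fsck options:

          # summary should end with "Status: HEALTHY" once the corrupt blocks are gone
          hdfs fsck / | tail -n 25

          # lists any files that still have corrupt blocks
          hdfs fsck / -list-corruptfileblocks
      -->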
      <pubDate>Sun, 27 Oct 2019 08:36:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281411#M209360</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2019-10-27T08:36:14Z</dc:date>
    </item>
    <item>
      <title>Re: under-replicated blocks + why we get this warning on new scratch installation?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281412#M209361</link>
      <description>&lt;P&gt;About the corrupted files&amp;nbsp;&lt;/P&gt;&lt;P&gt;why not just use the following?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;hdfs fsck / -delete&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 27 Oct 2019 09:28:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281412#M209361</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2019-10-27T09:28:54Z</dc:date>
    </item>
    <item>
      <title>Re: under-replicated blocks + why we get this warning on new scratch installation?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281414#M209363</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/59349"&gt;@mike_bronson7&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Surely you can use &lt;STRONG&gt;hdfs fsck / -delete&lt;/STRONG&gt;, but remember the deleted files will be put in the trash!&amp;nbsp;&lt;/P&gt;</description>
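      <!--
        If keeping the damaged files around for inspection matters, fsck also supports -move as an
        alternative to -delete (a sketch; trash behaviour can vary by version, so verify before relying on it):

          hdfs fsck / -move     # moves corrupted files into /lost+found
          hdfs fsck / -delete   # removes corrupted files outright
      -->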
      <pubDate>Sun, 27 Oct 2019 10:40:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281414#M209363</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2019-10-27T10:40:39Z</dc:date>
    </item>
    <item>
      <title>Re: under-replicated blocks + why we get this warning on new scratch installation?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281415#M209364</link>
      <description>&lt;P&gt;May I return to my first question?&lt;/P&gt;&lt;P&gt;While we were using Red Hat 7.2, everything was OK; after each scratch installation we never saw this.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;But when we jumped to Red Hat 7.5,&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;every cluster that was created had corrupted files. Any HINT why?&lt;/P&gt;</description>
      <pubDate>Sun, 27 Oct 2019 11:02:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/under-replicated-blocks-why-we-get-this-warning-on-new/m-p/281415#M209364</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2019-10-27T11:02:09Z</dc:date>
    </item>
  </channel>
</rss>

