<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: datanode + Directory is not writable in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225598#M77560</link>
    <description>&lt;P&gt;&lt;EM&gt;&lt;A href="@Michael Bronson"&gt;@Michael Bronson&lt;/A&gt;&lt;BR /&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Could you try unmounting and remounting that disk? Your disk could have gone bad and the FS is in read-only mode. Can you also set the failure tolerance to 1?&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Using &lt;B&gt;Ambari UI --&amp;gt; HDFS --&amp;gt; Configs --&amp;gt; Filter&lt;/B&gt;, find the property "&lt;B&gt;dfs.datanode.failed.volumes.tolerated&lt;/B&gt;" and set it to &lt;B&gt;1&lt;/B&gt;.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Restart the stale HDFS services.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;All should be in order.&lt;/EM&gt;&lt;/P&gt;</description>
    <pubDate>Tue, 24 Apr 2018 00:58:14 GMT</pubDate>
    <dc:creator>Shelton</dc:creator>
    <dc:date>2018-04-24T00:58:14Z</dc:date>
    <item>
      <title>datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225597#M77559</link>
      <description>&lt;P&gt;&lt;STRONG&gt;We have an Ambari cluster, HDP version 2.6.0.1.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;We have issues on &lt;STRONG&gt;worker02&lt;/STRONG&gt; according to the log &lt;STRONG&gt;hadoop-hdfs-datanode-worker02.sys65.com.log&lt;/STRONG&gt;:&lt;/P&gt;&lt;PRE&gt;2018-04-21 09:02:53,405 WARN  checker.StorageLocationChecker (StorageLocationChecker.java:check(208)) - Exception checking StorageLocation [DISK]file:/grid/sdc/hadoop/hdfs/data/
org.apache.hadoop.util.DiskChecker$DiskErrorException: Directory is not writable: /grid/sdc/hadoop/hdfs/data&lt;/PRE&gt;&lt;P&gt;Note: from the Ambari GUI we can see that the DataNode on &lt;STRONG&gt;worker02&lt;/STRONG&gt; is down.&lt;/P&gt;&lt;P&gt;Around the &lt;EM&gt;&lt;STRONG&gt;Directory is not writable: /grid/sdc/hadoop/hdfs/data&lt;/STRONG&gt;&lt;/EM&gt; message, we see the following in the log:&lt;/P&gt;&lt;PRE&gt;STARTUP_MSG: Starting DataNode
STARTUP_MSG:   user = hdfs
STARTUP_MSG:   host = worker02.sys65.com/23.87.23.126
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.3.2.6.0.3-8
STARTUP_MSG:   build = git@github.com:hortonworks/hadoop.git -r c6befa0f1e911140cc815e0bab744a6517abddae; compiled by 'jenkins' on 2017-04-01T21:32Z
STARTUP_MSG:   java = 1.8.0_112
************************************************************/
2018-04-21 09:02:52,854 INFO  datanode.DataNode (LogAdapter.java:info(47)) - registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-21 09:02:53,321 INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(107)) - Scheduling a check for [DISK]file:/grid/sdb/hadoop/hdfs/data/
2018-04-21 09:02:53,330 INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(107)) - Scheduling a check for [DISK]file:/grid/sdc/hadoop/hdfs/data/
2018-04-21 09:02:53,330 INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(107)) - Scheduling a check for [DISK]file:/grid/sdd/hadoop/hdfs/data/
2018-04-21 09:02:53,331 INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(107)) - Scheduling a check for [DISK]file:/grid/sde/hadoop/hdfs/data/
2018-04-21 09:02:53,331 INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(107)) - Scheduling a check for [DISK]file:/grid/sdf/hadoop/hdfs/data/
2018-04-21 09:02:53,405 WARN  checker.StorageLocationChecker (StorageLocationChecker.java:check(208)) - Exception checking StorageLocation [DISK]file:/grid/sdc/hadoop/hdfs/data/
org.apache.hadoop.util.DiskChecker$DiskErrorException: Directory is not writable: /grid/sdc/hadoop/hdfs/data
	at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:124)
	at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:99)
	at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:128)
	at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:44)
	at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:127)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
2018-04-21 09:02:53,410 ERROR datanode.DataNode (DataNode.java:secureMain(2691)) - Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 4, volumes configured: 5, volumes failed: 1, volume failures tolerated: 0
	at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:216)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2583)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2492)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2539)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2684)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2708)
2018-04-21 09:02:53,411 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2018-04-21 09:02:53,414 INFO  datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at worker02.sys65.com/23.87.23.126
************************************************************/
&lt;/PRE&gt;&lt;P&gt;We checked that:&lt;/P&gt;&lt;P&gt;1. All files and folders under /grid/sdc/hadoop/hdfs/ are owned by &lt;STRONG&gt;hdfs:hadoop&lt;/STRONG&gt;, &lt;EM&gt;and that is OK.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;2. Disk &lt;STRONG&gt;sdc&lt;/STRONG&gt; is mounted read-write (&lt;STRONG&gt;rw&lt;/STRONG&gt;,noatime,data=ordered), &lt;EM&gt;and that is OK.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;We suspect that the hard disk has gone bad. In that case, how do we check it?&lt;/P&gt;&lt;P&gt;Please advise what other options there are to resolve this issue.&lt;/P&gt;</description>
      <pubDate>Tue, 24 Apr 2018 00:09:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225597#M77559</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-24T00:09:34Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225598#M77560</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;A href="@Michael Bronson"&gt;@Michael Bronson&lt;/A&gt;&lt;BR /&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Could you try unmounting and remounting that disk? Your disk could have gone bad and the FS is in read-only mode. Can you also set the failure tolerance to 1?&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Using &lt;B&gt;Ambari UI --&amp;gt; HDFS --&amp;gt; Configs --&amp;gt; Filter&lt;/B&gt;, find the property "&lt;B&gt;dfs.datanode.failed.volumes.tolerated&lt;/B&gt;" and set it to &lt;B&gt;1&lt;/B&gt;.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Restart the stale HDFS services.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;All should be in order.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Apr 2018 00:58:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225598#M77560</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2018-04-24T00:58:14Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225599#M77561</link>
      <description>&lt;P&gt;Dear Geoffrey, we rebooted twice a few weeks ago, but that did not help (when we reboot we actually remount). About setting "&lt;EM&gt;&lt;STRONG&gt;dfs.datanode.failed.volumes.tolerated&lt;/STRONG&gt;&lt;/EM&gt;" to &lt;STRONG&gt;1&lt;/STRONG&gt;: we want to keep it at &lt;STRONG&gt;0&lt;/STRONG&gt; (we do not want to lose a disk).&lt;/P&gt;</description>
      <pubDate>Tue, 24 Apr 2018 02:39:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225599#M77561</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-24T02:39:53Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225600#M77562</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;A href="https://community.hortonworks.com/questions/187829/@Michael%20Bronson"&gt;@Michael Bronson&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;There could be a couple of reasons; let's check the obvious first. Have you checked SELinux on this host? If not:&lt;/I&gt;&lt;/P&gt;&lt;PRE&gt;$ echo 0 &amp;gt; /selinux/enforce
$ cat /selinux/enforce # should output "0"&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;"Read-only filesystem" is not a permissions issue. The mount has become read-only, either because of errors in the filesystem or problems in the device itself. If you run "grep sdc /proc/mounts" you should see it as "ro". There may be some clue as to why in the messages in /var/log/syslog.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Run a filesystem check with fsck; it will repair some of the errors. Execute fsck on an unmounted filesystem to avoid any data corruption issues, e.g.:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# fsck /dev/sdc&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;That should repair the damage.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Apr 2018 03:17:40 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225600#M77562</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2018-04-24T03:17:40Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225601#M77563</link>
      <description>&lt;P&gt;Dear Geoffrey, /grid/sdc holds an HDFS filesystem; isn't running fsck on that disk risky? See also - &lt;A href="http://fibrevillage.com/storage/658-how-to-use-hdfs-fsck-command-to-identify-corrupted-files" target="_blank"&gt;http://fibrevillage.com/storage/658-how-to-use-hdfs-fsck-command-to-identify-corrupted-files&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Apr 2018 11:34:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225601#M77563</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-24T11:34:29Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225602#M77564</link>
      <description>&lt;P&gt;What do you think about the following steps to fix corrupted files? (I took them from &lt;A href="https://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hdfs-files" target="_blank"&gt;https://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hdfs-files&lt;/A&gt;.)&lt;/P&gt;&lt;P&gt;To determine which files are having problems (this ignores lines with nothing but dots and lines talking about replication):&lt;/P&gt;&lt;PRE&gt;hdfs fsck / | egrep -v '^\.+$' | grep -v eplica&lt;/PRE&gt;</description>
      <pubDate>Tue, 24 Apr 2018 11:48:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225602#M77564</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-24T11:48:44Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225603#M77565</link>
      <description>&lt;P&gt;Once you find a file that is corrupt:&lt;/P&gt;&lt;PRE&gt;hdfs fsck /path/to/corrupt/file -locations -blocks -files&lt;/PRE&gt;</description>
      <pubDate>Tue, 24 Apr 2018 11:50:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225603#M77565</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-24T11:50:39Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225604#M77566</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;A href="https://community.hortonworks.com/questions/187829/@Michael%20Bronson"&gt;@Michael Bronson&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Above you are trying to fix corrupt HDFS blocks! With the default replication factor of 3 you should be okay; below is how to fix the filesystem itself.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;What is your filesystem type, ext4 or something else? You can run:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# e2fsck -y /dev/sdc&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;but then you will not have an opportunity to validate the corrections being applied. On the other hand, if you run:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# e2fsck -n /dev/sdc&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;you can see what would happen without anything actually being applied. And if you run:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# e2fsck /dev/sdc&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;you'll be asked each time a significant correction needs to be applied.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Apr 2018 14:19:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225604#M77566</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2018-04-24T14:19:12Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225605#M77567</link>
      <description>&lt;P&gt;Dear Geoffrey, the filesystem is ext4.&lt;/P&gt;</description>
      <pubDate>Tue, 24 Apr 2018 14:55:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225605#M77567</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-24T14:55:07Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225606#M77568</link>
      <description>&lt;P&gt;Dear Geoffrey, as you know, before performing &lt;STRONG&gt;fsck /dev/sdc&lt;/STRONG&gt; we must umount /grid/sdc (or umount -l /grid/sdc); only then can we run fsck /dev/sdc. So can you finally approve the following steps?&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;1. umount /grid/sdc, or umount -l /grid/sdc in case the device is busy&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;2. fsck /dev/sdc&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Apr 2018 15:02:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225606#M77568</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-24T15:02:47Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225607#M77569</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;A href="https://community.hortonworks.com/questions/187829/@Michael%20Bronson"&gt;@Michael Bronson&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;Any updates?  &lt;/P&gt;</description>
      <pubDate>Thu, 26 Apr 2018 02:35:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225607#M77569</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2018-04-26T02:35:13Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225608#M77570</link>
      <description>&lt;P&gt;Hi Geoffrey,&lt;/P&gt;&lt;P&gt;We are just waiting for your approval of the following steps:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;1. umount /grid/sdc, or umount -l /grid/sdc in case the device is busy&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;2. fsck -y /dev/sdc&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;3. mount /grid/sdc&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 26 Apr 2018 02:38:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225608#M77570</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-26T02:38:02Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225609#M77571</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;A href="https://community.hortonworks.com/questions/187829/@Michael%20Bronson"&gt;@Michael Bronson&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Avahi is a system that facilitates service discovery on a local network via the mDNS/DNS-SD protocol suite. It enables you to plug your laptop or computer into a network and instantly see other people you can chat with, printers to print to, or files being shared. Compatible technology is found in Apple macOS (branded Bonjour and sometimes Zeroconf).&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;The two big benefits of Avahi are name resolution &amp;amp; finding printers, but on a server, in a managed environment, it's of little value.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Unmounting and mounting filesystems is a common thing, especially in Hadoop clusters; your SysOps team should have validated that, but all looks correct to me.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Do a dry run with the command below to see what will be affected; that will give you a better picture.&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# e2fsck -n /dev/sdc&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;The data will be reconstructed since you have the default replication factor; you can later rebalance the HDFS data.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 26 Apr 2018 03:03:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225609#M77571</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2018-04-26T03:03:06Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225610#M77572</link>
      <description>&lt;P&gt;Yes, we already did that on one of the disks; please see - &lt;A href="https://community.hortonworks.com/questions/189016/datanode-machine-worker-one-of-the-disks-have-file.html" target="_blank"&gt;https://community.hortonworks.com/questions/189016/datanode-machine-worker-one-of-the-disks-have-file.html&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 26 Apr 2018 03:05:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225610#M77572</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-26T03:05:19Z</dc:date>
    </item>
    <item>
      <title>Re: datanode + Directory is not writable</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225611#M77573</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;A href="https://community.hortonworks.com/questions/187829/@Michael%20Bronson"&gt;@Michael Bronson&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;The disk is already unusable, so go ahead and run &lt;STRONG&gt;fsck&lt;/STRONG&gt; with the -y option to repair it &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt; (see above).&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Either way, you will have to replace that failing disk anyway!&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 26 Apr 2018 03:25:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-Directory-is-not-writable/m-p/225611#M77573</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2018-04-26T03:25:47Z</dc:date>
    </item>
  </channel>
</rss>

