<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Cannot start HDFS after its data was deleted externally in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212569#M174505</link>
    <description>&lt;P&gt;So... what are the steps to reinstall?&lt;/P&gt;&lt;P&gt;Is there any way to start over from just the HDP installation, while keeping the OS-level prerequisite changes and the Ambari installation?&lt;/P&gt;&lt;P&gt;Does the command &lt;STRONG&gt;ambari-server reset&lt;/STRONG&gt; work for that?&lt;/P&gt;</description>
    <pubDate>Mon, 04 Dec 2017 21:40:17 GMT</pubDate>
    <dc:creator>sedatkestepe</dc:creator>
    <dc:date>2017-12-04T21:40:17Z</dc:date>
    <item>
      <title>Cannot start HDFS after its data was deleted externally</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212566#M174502</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;After a mass disk operation on our test environment, we lost all the data in the /data directory, which was assigned as the storage directory for Zookeeper, Hadoop and Falcon (the list we know of so far).&lt;/P&gt;&lt;P&gt;Since it was our test cluster, the data is not important, but I don't want to reinstall all the components. I also want to learn how to recover a cluster from this state.&lt;/P&gt;&lt;P&gt;In the /data directory we only have folders, no files.&lt;/P&gt;&lt;P&gt;After struggling a little with the ZKFailoverController, I was able to start it with the -formatZK flag.&lt;/P&gt;&lt;P&gt;Now, however, I am unable to start the namenode(s), getting the exception below:&lt;/P&gt;&lt;P&gt;10.0.109.12:8485: Directory /hadoop/hdfs/journal/testnamespace is in an inconsistent state: Can't format the storage directory because the current directory is not empty.&lt;/P&gt;&lt;P&gt;I have tried:&lt;/P&gt;&lt;P&gt;- removing the lost+found folder on the mount root,&lt;/P&gt;&lt;P&gt;- changing the ownership of all folders under /data/hadoop/hdfs to hdfs:hadoop,&lt;/P&gt;&lt;P&gt;- changing the permissions of all folders under /data/hadoop/hdfs to 777.&lt;/P&gt;&lt;P&gt;PS: I updated the ownership of the path /hadoop/hdfs/, which contains the journal folder, and that got me one step further:&lt;/P&gt;&lt;P&gt;17/12/01 14:20:26 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot remove current directory: /data/hadoop/hdfs/namenode/current&lt;/P&gt;&lt;P&gt;PS: I removed the contents of /data/hadoop/hdfs/namenode/current, and now it keeps checking port 8485 of all journal quorum nodes:&lt;/P&gt;&lt;P&gt;17/12/01 16:04:35 INFO ipc.Client: Retrying connect to server: bigdata2/10.0.109.11:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)&lt;/P&gt;&lt;P&gt;and it keeps printing the line below in the hadoop-hdfs-zkfc-bigdata2.out file:&lt;/P&gt;&lt;P&gt;Proceed formatting /hadoop-ha/testnamespace? (Y or N) Invalid input:&lt;/P&gt;&lt;P&gt;Do you have any suggestions?&lt;/P&gt;&lt;P&gt;Or should I give up?&lt;/P&gt;</description>
      <pubDate>Fri, 01 Dec 2017 22:11:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212566#M174502</guid>
      <dc:creator>sedatkestepe</dc:creator>
      <dc:date>2017-12-01T22:11:03Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot start HDFS after its data was deleted externally</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212567#M174503</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/15282/skestepe.html" nodeid="15282"&gt;@Sedat Kestepe&lt;/A&gt; &lt;/P&gt;&lt;P&gt;Since you don't care about the data, from an HDFS perspective it is easier to reinstall your cluster. If you insist, I can lead you through the recovery steps, but if I were you I would just reinstall at this point.&lt;/P&gt;</description>
      <pubDate>Sat, 02 Dec 2017 02:23:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212567#M174503</guid>
      <dc:creator>aengineer</dc:creator>
      <dc:date>2017-12-02T02:23:49Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot start HDFS after its data was deleted externally</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212568#M174504</link>
      <description>&lt;P&gt;If the recovery steps will take more effort than a reinstall and/or leave me with an unstable cluster, then it's better to reinstall.&lt;/P&gt;&lt;P&gt;From your answer, I take it that's the kind of cost you mean, right?&lt;/P&gt;</description>
      <pubDate>Mon, 04 Dec 2017 18:55:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212568#M174504</guid>
      <dc:creator>sedatkestepe</dc:creator>
      <dc:date>2017-12-04T18:55:25Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot start HDFS after its data was deleted externally</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212569#M174505</link>
      <description>&lt;P&gt;So... what are the steps to reinstall?&lt;/P&gt;&lt;P&gt;Is there any way to start over from just the HDP installation, while keeping the OS-level prerequisite changes and the Ambari installation?&lt;/P&gt;&lt;P&gt;Does the command &lt;STRONG&gt;ambari-server reset&lt;/STRONG&gt; work for that?&lt;/P&gt;</description>
      <pubDate>Mon, 04 Dec 2017 21:40:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212569#M174505</guid>
      <dc:creator>sedatkestepe</dc:creator>
      <dc:date>2017-12-04T21:40:17Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot start HDFS after its data was deleted externally</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212570#M174506</link>
      <description>&lt;P&gt;&lt;EM&gt;@&lt;A href="https://community.hortonworks.com/users/15282/skestepe.html"&gt;Sedat Kestepe&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;&lt;STRONG&gt;Stop the HDFS service&lt;/STRONG&gt; if it's running.&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Start only the journal nodes&lt;/STRONG&gt; (they will need to be made aware of the formatting).&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;On the namenode, become the hdfs user:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# su - hdfs &lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Format the namenode:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;$ hadoop namenode -format &lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Initialize the shared edits (for the journal nodes):&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;$ hdfs namenode -initializeSharedEdits -force &lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Format ZooKeeper (to force ZooKeeper to reinitialise):&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;$ hdfs zkfc -formatZK -force &lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Using Ambari, restart the namenode.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;If you are running an HA namenode:&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;On the second namenode, force a sync with the first namenode:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;$ hdfs namenode -bootstrapStandby -force &lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;On every datanode, clear the data directory (already done in your case).&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Restart the HDFS service.&lt;/STRONG&gt;&lt;BR /&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Hope that helps.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 04 Dec 2017 22:15:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212570#M174506</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2017-12-04T22:15:49Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot start HDFS after its data was deleted externally</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212571#M174507</link>
      <description>&lt;P&gt;Hi &lt;A rel="user" href="https://community.cloudera.com/users/1271/sheltong.html" nodeid="1271"&gt;@Geoffrey Shelton Okot&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;I had tried &lt;STRONG&gt;hadoop namenode -format&lt;/STRONG&gt; before, but I tried again and received the same exception:&lt;/P&gt;&lt;P&gt;17/12/05 09:46:25 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Could not format one or more JournalNodes. 2 exceptions thrown:
10.0.109.11:8485: Directory /hadoop/hdfs/journal/testnamespace is in an inconsistent state: Can't format the storage directory because the current directory is not empty.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:482)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:558)
at org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185)
at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:217)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:145)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)&lt;/P&gt;&lt;P&gt;This time I additionally deleted the contents of /hadoop/hdfs/journal/testnamespace, but nothing changed; the command ended with the same exception.&lt;/P&gt;</description>
      <pubDate>Tue, 05 Dec 2017 16:09:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212571#M174507</guid>
      <dc:creator>sedatkestepe</dc:creator>
      <dc:date>2017-12-05T16:09:15Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot start HDFS after its data was deleted externally</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212572#M174508</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;A href="https://community.hortonworks.com/users/15282/skestepe.html"&gt;@Sedat Kestepe&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Can you delete the entry in ZooKeeper and restart?&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# locate zkCli.sh
/usr/hdp/2.x.x.x/zookeeper/bin/zkCli.sh
# /usr/hdp/2.x.x.x/zookeeper/bin/zkCli.sh 
[zk: localhost:2181(CONNECTED) 8] ls /hadoop-ha/ &lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;You should see something like:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;[zk: localhost:2181(CONNECTED) 8] ls /hadoop-ha/xxxxx &lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Delete the HDFS HA config entry:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;[zk: localhost:2181(CONNECTED) 1] rmr /hadoop-ha &lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Validate that there is no longer a hadoop-ha entry:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;[zk: localhost:2181(CONNECTED) 2] ls / &lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Then restart all HDFS service components. This will create a new znode with the correct lock (from the failover controller).&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Please let me know if that helped.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 05 Dec 2017 19:03:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212572#M174508</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2017-12-05T19:03:55Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot start HDFS after its data was deleted externally</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212573#M174509</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/1271/sheltong.html" nodeid="1271"&gt;@Geoffrey Shelton Okot&lt;/A&gt; &lt;/P&gt;&lt;P&gt;Unfortunately, I couldn't start the HDFS services this way either. Thank you very much, though.&lt;/P&gt;</description>
      <pubDate>Tue, 05 Dec 2017 22:24:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Can-not-start-HDFS-which-its-data-was-deleted-externally/m-p/212573#M174509</guid>
      <dc:creator>sedatkestepe</dc:creator>
      <dc:date>2017-12-05T22:24:15Z</dc:date>
    </item>
  </channel>
</rss>

