<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Databode uuid unassigned in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Databode-uuid-unassigned/m-p/202367#M164373</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/3548/bharathwgl.html" nodeid="3548"&gt;@Bharath N&lt;/A&gt;
&lt;/P&gt;&lt;P&gt;Try to perform the following steps on the failed DataNode:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Get the list of DataNode directories from /etc/hadoop/conf/hdfs-site.xml using the following command: &lt;PRE&gt;$ grep -A1 dfs.datanode.data.dir /etc/hadoop/conf/hdfs-site.xml
      &amp;lt;name&amp;gt;dfs.datanode.data.dir&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/data0/hadoop/hdfs/data,/data1/hadoop/hdfs/data,/data2/hadoop/hdfs/data,
/data3/hadoop/hdfs/data,/data4/hadoop/hdfs/data,/data5/hadoop/hdfs/data,/data6/hadoop/hdfs/data,
/data7/hadoop/hdfs/data,/data8/hadoop/hdfs/data,/data9/hadoop/hdfs/data&amp;lt;/value&amp;gt;&lt;/PRE&gt;&lt;/LI&gt;&lt;LI&gt;Get datanodeUuid by grepping the DataNode log: &lt;PRE&gt;$ grep "datanodeUuid=" /var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname).log | head -n 1 | 
perl -ne '/datanodeUuid=(.*?),/ &amp;amp;&amp;amp; print "$1\n"'
1dacef53-aee2-4906-a9ca-4a6629f21347&lt;/PRE&gt;&lt;/LI&gt;&lt;LI&gt;Copy over a VERSION file from one of the &amp;lt;dfs.datanode.data.dir&amp;gt;/current/ directories of a healthy running DataNode: &lt;PRE&gt;$ scp &amp;lt;healthy datanode host&amp;gt;:&amp;lt;dfs.datanode.data.dir&amp;gt;/current/VERSION ./&lt;/PRE&gt;&lt;/LI&gt;&lt;LI&gt;Replace the datanodeUuid in the VERSION file with the datanodeUuid found by the grep above: &lt;PRE&gt;$ sed -i.bak -E 's|(datanodeUuid)=(.*$)|\1=1dacef53-aee2-4906-a9ca-4a6629f21347|' VERSION&lt;/PRE&gt;&lt;/LI&gt;&lt;LI&gt;Blank out the storageID= property in the VERSION file: &lt;PRE&gt;$ sed -i.bak -E 's|(storageID)=(.*$)|\1=|' VERSION&lt;/PRE&gt;&lt;/LI&gt;&lt;LI&gt;Copy this modified VERSION file into the current/ path of every directory listed in the dfs.datanode.data.dir property of hdfs-site.xml: &lt;PRE&gt;$ for i in {0..9}; do cp VERSION /data$i/hadoop/hdfs/data/current/; done&lt;/PRE&gt;&lt;/LI&gt;&lt;LI&gt;Set ownership of this VERSION file to hdfs:hdfs and its permissions to 644: &lt;PRE&gt;$ for i in {0..9}; do chown hdfs:hdfs /data$i/hadoop/hdfs/data/current/VERSION; done
$ for i in {0..9}; do chmod 644 /data$i/hadoop/hdfs/data/current/VERSION; done&lt;/PRE&gt;&lt;/LI&gt;&lt;LI&gt;One more level down, there is a different VERSION file located under the Block Pool current folder at: &lt;PRE&gt;/data0/hadoop/hdfs/data/current/BP-*/current/VERSION&lt;/PRE&gt; This file does not need to be modified -- just place copies of it in the appropriate directories.&lt;/LI&gt;&lt;LI&gt;Copy over this particular VERSION file from a healthy DataNode into the current/BP-*/current/ folder of each directory listed in dfs.datanode.data.dir of hdfs-site.xml: &lt;PRE&gt;$ scp &amp;lt;healthy datanode host&amp;gt;:&amp;lt;dfs.datanode.data.dir&amp;gt;/current/BP-*/current/VERSION ./VERSION2
$ for i in {0..9}; do cp VERSION2 /data$i/hadoop/hdfs/data/current/BP-*/current/VERSION; done&lt;/PRE&gt;&lt;/LI&gt;&lt;LI&gt;Set ownership of this VERSION file to hdfs:hdfs and its permissions to 644: &lt;PRE&gt;$ for i in {0..9}; do chown hdfs:hdfs /data$i/hadoop/hdfs/data/current/BP-*/current/VERSION; done
$ for i in {0..9}; do chmod 644 /data$i/hadoop/hdfs/data/current/BP-*/current/VERSION; done&lt;/PRE&gt;&lt;/LI&gt;&lt;LI&gt;Restart the DataNode from Ambari.&lt;/LI&gt;&lt;LI&gt;On startup, the VERSION file located at &amp;lt;dfs.datanode.data.dir&amp;gt;/current/VERSION will have its storageID repopulated with a newly generated ID.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;If losing the node's data is not an issue (for example, because the node was previously in a different cluster or was out of service for an extended time), then:&lt;/P&gt;&lt;UL&gt;
&lt;LI&gt;delete all data and subdirectories under each dfs.datanode.data.dir directory (but keep the directory itself),&lt;/LI&gt;&lt;LI&gt;restart the DataNode daemon or service.&lt;/LI&gt;&lt;/UL&gt;</description>
    <pubDate>Wed, 23 May 2018 12:49:15 GMT</pubDate>
    <dc:creator>bandarusridhar1</dc:creator>
    <dc:date>2018-05-23T12:49:15Z</dc:date>
  </channel>
</rss>

