<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: hadoop nodes from SUSE to RHEL in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104803#M25499</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2348/divakarreddya.html" nodeid="2348"&gt;@Divakar Annapureddy&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Does this mean that we can't restore the cluster with the meta-data/data backups we take before the cluster re-install?&lt;/P&gt;</description>
    <pubDate>Tue, 19 Apr 2016 02:57:58 GMT</pubDate>
    <dc:creator>rbalam</dc:creator>
    <dc:date>2016-04-19T02:57:58Z</dc:date>
    <item>
      <title>hadoop nodes from SUSE to RHEL</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104801#M25497</link>
      <description>&lt;P&gt;What is the best approach to restoring an HDP cluster if a customer would like to migrate an existing HDP cluster from SUSE to RHEL?&lt;/P&gt;&lt;P&gt;Is it the same as re-installing the OS/HDP cluster and restoring it with the backed-up data/config? Please advise.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:14:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104801#M25497</guid>
      <dc:creator>rbalam</dc:creator>
      <dc:date>2022-09-16T10:14:30Z</dc:date>
    </item>
    <item>
      <title>Re: hadoop nodes from SUSE to RHEL</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104802#M25498</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/392/rbalam.html" nodeid="392"&gt;@rbalam&lt;/A&gt;
&lt;/P&gt;&lt;P&gt;The SUSE &amp;amp; RHEL repos/RPMs are different; I don't think a simple migration works here the way upgrading a Linux version from RHEL 6 to RHEL 7 does.&lt;/P&gt;&lt;P&gt;&lt;A href="http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_Installing_HDP_AMB/content/_ambari_repositories.html" target="_blank"&gt;http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_Installing_HDP_AMB/content/_ambari_repositories.html&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Apr 2016 02:24:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104802#M25498</guid>
      <dc:creator>divakarreddy_a</dc:creator>
      <dc:date>2016-04-19T02:24:58Z</dc:date>
    </item>
    <item>
      <title>Re: hadoop nodes from SUSE to RHEL</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104803#M25499</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2348/divakarreddya.html" nodeid="2348"&gt;@Divakar Annapureddy&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Does this mean that we can't restore the cluster with the meta-data/data backups we take before the cluster re-install?&lt;/P&gt;</description>
      <pubDate>Tue, 19 Apr 2016 02:57:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104803#M25499</guid>
      <dc:creator>rbalam</dc:creator>
      <dc:date>2016-04-19T02:57:58Z</dc:date>
    </item>
    <item>
      <title>Re: hadoop nodes from SUSE to RHEL</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104804#M25500</link>
      <description>&lt;P&gt;Good question, but I'm not a product engineer, so I can't comment on this authoritatively. That said, we normally see huge compatibility issues even between versions of the same Linux flavor, e.g., CentOS 6 &amp;amp; CentOS 7.&lt;/P&gt;&lt;P&gt;I think your SUSE Hadoop metadata backups won't work on RHEL.&lt;/P&gt;</description>
      <pubDate>Tue, 19 Apr 2016 03:08:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104804#M25500</guid>
      <dc:creator>divakarreddy_a</dc:creator>
      <dc:date>2016-04-19T03:08:18Z</dc:date>
    </item>
    <item>
      <title>Re: hadoop nodes from SUSE to RHEL</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104805#M25501</link>
      <description>&lt;P&gt;I am actually pretty sure that most backups will still work. Sure, all the RPMs etc. will be different, but let's go through it one by one:&lt;/P&gt;&lt;P&gt;a) HDFS data: this should really not depend on the OS, unless the data is switched from big-endian to little-endian or something similar.&lt;/P&gt;&lt;P&gt;b) Databases (Ambari, Hive, Oozie, ...): these should not depend on the OS either. It obviously depends on the database, but if you do an export/import you should be fine; simply copying the files over might be a different matter. Note that you would need to change the hostnames inside the backups: for Hive that is a single location, for the others it could be more complicated, unless you migrate the hostnames 1:1.&lt;/P&gt;&lt;P&gt;c) Configs: I think the easiest way here would be blueprints (i.e., export one and set up the new cluster with it), OR install clean and apply the settings carefully, which might be safer if modifications are needed.&lt;/P&gt;&lt;P&gt;d) The timeline store, Spark history, etc. most likely don't need to be kept.&lt;/P&gt;&lt;P&gt;That said, it might be safer to set up the new cluster and distcp the data over instead of copying the NameNode/DataNode folders. I really don't think the OS should affect them, though (fair warning: I've never done it).&lt;/P&gt;&lt;P&gt;My tip would be to try it on a sandbox first: install a single-node SUSE cluster, create a table, an Oozie job, and a couple of files, and then migrate everything.&lt;/P&gt;</description>
      <pubDate>Tue, 19 Apr 2016 04:14:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104805#M25501</guid>
      <dc:creator>bleonhardi</dc:creator>
      <dc:date>2016-04-19T04:14:44Z</dc:date>
    </item>
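The export/import, blueprint, and distcp steps sketched in the reply above can be written down as concrete command lines. The sketch below only assembles the commands as strings; every hostname (old-nn, new-nn, ambari-host), the cluster name, the Postgres-backed Hive metastore, the admin credentials, and all ports and paths are hypothetical placeholders, not values from this thread:

```python
# Minimal sketch of the backup/restore command sequence described above.
# All hostnames, credentials, ports, and paths are hypothetical
# placeholders -- substitute values from your own cluster.

def migration_commands(old_nn, new_nn, ambari_host, cluster, hdfs_path):
    """Assemble shell commands for a metastore export, an Ambari
    blueprint export, and a distcp copy between old and new clusters."""
    return [
        # b) export the Hive metastore DB (assumes a Postgres backend)
        "pg_dump -U hive hive -f /backup/hive_metastore.sql",
        # c) export the cluster configuration as an Ambari blueprint
        ("curl -u admin:admin "
         f"http://{ambari_host}:8080/api/v1/clusters/{cluster}"
         "?format=blueprint -o /backup/blueprint.json"),
        # safer data path: distcp between clusters instead of copying
        # NameNode/DataNode directories at the filesystem level
        (f"hadoop distcp hdfs://{old_nn}:8020{hdfs_path} "
         f"hdfs://{new_nn}:8020{hdfs_path}"),
    ]

for cmd in migration_commands("old-nn", "new-nn", "ambari-host",
                              "mycluster", "/data"):
    print(cmd)
```

Composing the commands first rather than executing them directly makes the sequence easy to review and test before pointing it at a real cluster.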
    <item>
      <title>Re: hadoop nodes from SUSE to RHEL</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104806#M25502</link>
      <description>&lt;P&gt;Thanks, Benjamin, for the detailed explanation.&lt;/P&gt;</description>
      <pubDate>Tue, 19 Apr 2016 04:31:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104806#M25502</guid>
      <dc:creator>divakarreddy_a</dc:creator>
      <dc:date>2016-04-19T04:31:32Z</dc:date>
    </item>
    <item>
      <title>Re: hadoop nodes from SUSE to RHEL</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104807#M25503</link>
      <description>&lt;P&gt;As Benjamin said, I strongly encourage you to establish your process with a small test cluster first. However, I do not expect problems with the data: Hadoop is written in Java, so the on-disk form of the data should be the same between operating systems, especially across Linux variants.&lt;/P&gt;&lt;P&gt;Warning: do not upgrade both the operating system and the HDP version all at once! Change one major variable at a time, and make sure the system is stable in between. So go ahead and change the OS, but keep the HDP version the same until you are done and satisfied with the state of the new OS.&lt;/P&gt;&lt;P&gt;The biggest potential gotcha is a ClusterID mismatch caused by your backup and restore process. If you are backing up the data by distcp-ing it between clusters, this won't be an issue; the &lt;STRONG&gt;namespaceID/clusterID/blockpoolID&lt;/STRONG&gt; will probably change, but that won't matter, since distcp actually creates new files. But if you are using a traditional file-based backup and restore, from tape or a SAN, then you may see this: after you think you've fully restored and you try to start up HDFS, it tells you that you need to format the file system, or the HDFS file system simply appears empty despite the files all being back in place. If this happens, "ClusterID mismatch" is the first thing to check, starting with &lt;A href="http://hortonworks.com/blog/hdfs-metadata-directories-explained/"&gt;http://hortonworks.com/blog/hdfs-metadata-directories-explained/&lt;/A&gt; for background. I won't say more, because you probably won't have the problem and it would be confusing to discuss in the abstract.&lt;/P&gt;</description>
      <pubDate>Wed, 20 Apr 2016 02:04:08 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104807#M25503</guid>
      <dc:creator>mfoley</dc:creator>
      <dc:date>2016-04-20T02:04:08Z</dc:date>
    </item>
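The ClusterID-mismatch check described in the reply above comes down to comparing the clusterID field in the VERSION files under the NameNode and DataNode metadata directories. A minimal sketch, assuming only the standard key=value layout of those files (the directory paths vary by install, e.g. a NameNode's current/VERSION versus a DataNode's):

```python
# Compare the clusterID recorded in two HDFS VERSION files.
# Paths vary by install; the parsing below only assumes the standard
# key=value layout of the VERSION file, with '#' comment lines.

def parse_version(text):
    """Parse a VERSION file body into a dict of its key=value lines."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key] = value
    return props

def cluster_ids_match(namenode_version, datanode_version):
    """True if both VERSION files carry the same clusterID."""
    nn = parse_version(namenode_version).get("clusterID")
    dn = parse_version(datanode_version).get("clusterID")
    return nn is not None and nn == dn

# Illustrative, made-up VERSION contents -- a mismatched pair:
nn_text = ("#Tue Apr 19 04:14:44 UTC 2016\n"
           "namespaceID=1342387246\n"
           "clusterID=CID-aaaa-bbbb\n"
           "blockpoolID=BP-1-old-nn-1461000000000\n")
dn_text = "clusterID=CID-aaaa-cccc\n"
print(cluster_ids_match(nn_text, dn_text))  # mismatch, prints False
```

Running this against the real VERSION files after a file-based restore would tell you quickly whether the "format the file system" or empty-HDFS symptom is the ClusterID problem the post describes.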
    <item>
      <title>Re: hadoop nodes from SUSE to RHEL</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104808#M25504</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/536/mfoley.html" nodeid="536"&gt;@Matt Foley&lt;/A&gt; Thanks for the additional information. This is very helpful.&lt;/P&gt;</description>
      <pubDate>Thu, 21 Apr 2016 01:17:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/hadoop-nodes-from-SUSE-to-RHEL/m-p/104808#M25504</guid>
      <dc:creator>rbalam</dc:creator>
      <dc:date>2016-04-21T01:17:20Z</dc:date>
    </item>
  </channel>
</rss>