<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: [CDH 5.10 upgrade] Wrong FS Hive tables in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/63886#M53388</link>
    <description>&lt;P&gt;&lt;SPAN&gt;Unfortunately this happened again in 5.13.1 when I ran&amp;nbsp;"&lt;/SPAN&gt;&lt;SPAN&gt;Update Hive Metastore NameNodes" and it added the port twice.&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Mon, 22 Jan 2018 15:48:51 GMT</pubDate>
    <dc:creator>maziyar</dc:creator>
    <dc:date>2018-01-22T15:48:51Z</dc:date>
    <item>
      <title>[CDH 5.10 upgrade] Wrong FS Hive tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/50410#M53380</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I just upgraded my cluster from 5.9 to 5.10 last night. Now I am seeing the "Wrong FS" problem, with the famous duplicate "8020:8020" in my existing tables' HDFS URIs.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The new tables are fine, so something must have gone wrong during the upgrade.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've seen that one solution is to alter the location of the existing tables. But my problem is that, due to the wrong FS, both alter and drop fail (drop table gives the same error):&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hive&amp;gt; alter table &lt;SPAN&gt;mytable&lt;/SPAN&gt; set location "hdfs://hadoop-master-1:8020/user/maziyar/warehouse/&lt;SPAN&gt;mytable&lt;/SPAN&gt;";&lt;BR /&gt;FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Got exception: java.io.IOException Incomplete HDFS URI, no host: hdfs://hadoop-master-1:8020:8020/user/maziyar/warehouse/&lt;SPAN&gt;mytable&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;My question is, how can I fix this? Neither alter nor drop works. I am kind of stuck &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;PS: I upgraded my cluster from parcels. Everything else seems fine so far.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Many thanks.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Maziyar&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 11:01:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/50410#M53380</guid>
      <dc:creator>maziyar</dc:creator>
      <dc:date>2022-09-16T11:01:04Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.10 upgrade] Wrong FS Hive tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/50411#M53381</link>
      <description>&lt;P&gt;I have more info. After upgrading to CDH 5.10, I ran "&lt;SPAN class="command-container"&gt;&lt;SPAN&gt;Update Hive Metastore NameNodes" from Cloudera Manager, and that is what introduced the duplicate port via HiveMetaTool.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="command-container"&gt;&lt;SPAN&gt;I checked the new table that had been working; after updating the Metastore NameNodes, it now has the duplicate port in its URI as well.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="command-container"&gt;&lt;SPAN&gt;Is there a way to fix this in:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="command-container"&gt;&lt;SPAN&gt;/usr/lib/cmf/service/hive/hive.sh&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="command-container"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="command-container"&gt;&lt;SPAN&gt;Many thanks,&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="command-container"&gt;&lt;SPAN&gt;maziyar&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 04 Feb 2017 11:56:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/50411#M53381</guid>
      <dc:creator>maziyar</dc:creator>
      <dc:date>2017-02-04T11:56:32Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.10 upgrade] Wrong FS Hive tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/50413#M53382</link>
      <description>&lt;P&gt;I also tried metatool to update the location, but it didn't work:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hive --config /etc/hive/conf/conf.server --service metatool -updateLocation "hdfs://hadoop-master-1:8020" "hdfs://hadoop-master-1:8020:8020"&lt;BR /&gt;Initializing HiveMetaTool..&lt;BR /&gt;HiveMetaTool:A valid host is required in both old-loc and new-loc&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;OK, now I have tried everything possible. There is no way to update the location&amp;nbsp;or drop the tables.&lt;/P&gt;</description>
      <pubDate>Sat, 04 Feb 2017 12:34:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/50413#M53382</guid>
      <dc:creator>maziyar</dc:creator>
      <dc:date>2017-02-04T12:34:20Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.10 upgrade] Wrong FS Hive tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/50414#M53383</link>
      <description>&lt;P&gt;OK, here is the only thing that worked.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I updated the DBS table in the warehouse database in MySQL with the correct URI. After that, alter table .. set location worked on all the existing tables.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So I am not sure whether there is a bug in "/usr/lib/cmf/service/hive/hive.s" when you use "&lt;SPAN&gt;Update Hive Metastore NameNodes", or whether this should only be used when you have HA enabled (I didn't!). Either way, it added the duplicate ports.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best,&lt;/P&gt;&lt;P&gt;Maziyar&lt;/P&gt;</description>
      <pubDate>Sat, 04 Feb 2017 12:59:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/50414#M53383</guid>
      <dc:creator>maziyar</dc:creator>
      <dc:date>2017-02-04T12:59:41Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.10 upgrade] Wrong FS Hive tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/50421#M53384</link>
      <description>I am going to go with "bug" and will try to test this out to confirm, so an official bug report can be submitted. I think I ran into something similar. I wasn't upgrading; it was a fresh install. I restored a metastore from an older version and ran the same command to update the NN URI for Hive. If I recall correctly, the tables were updated correctly but the DB locations were not; they had the same double-port entry.&lt;BR /&gt;&lt;BR /&gt;I used the same fix, updating the DBS table in the metastore DB directly; I didn't try the other command you listed.&lt;BR /&gt;&lt;BR /&gt;This was on CDH 5.8.2.</description>
      <pubDate>Sat, 04 Feb 2017 22:16:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/50421#M53384</guid>
      <dc:creator>mbigelow</dc:creator>
      <dc:date>2017-02-04T22:16:34Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.10 upgrade] Wrong FS Hive tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/60179#M53385</link>
      <description>&lt;P&gt;I would say this is a bug. If the user isn't supposed to perform a certain action (updating the NameNode URIs in this case), then the UI should either have prevented the action or have done nothing when the cluster was not HA-enabled for the NN. Having to manipulate the metadata directly is bad. I will file an internal JIRA. Thank you for reporting.&lt;/P&gt;</description>
      <pubDate>Wed, 20 Sep 2017 16:24:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/60179#M53385</guid>
      <dc:creator>NaveenGangam</dc:creator>
      <dc:date>2017-09-20T16:24:13Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.10 upgrade] Wrong FS Hive tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/61807#M53386</link>
      <description>&lt;P&gt;I am not sure if this was resolved, but after I upgraded from 5.11.1 to 5.13.0 I am seeing this error in spark2-shell:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;scala&amp;gt; spark.sqlContext.sql("CREATE TABLE IF NOT EXISTS default.employee_test123(id INT, name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'")&lt;BR /&gt;java.lang.IllegalArgumentException: Wrong FS: hdfs://abc23.xxx.com:8020/user/hive/warehouse/employee_test123, expected: hdfs://nameservice1&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:662)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:482)&lt;BR /&gt;at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply$mcV$sp(HiveExternalCatalog.scala:231)&lt;BR /&gt;at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:200)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have followed the instructions in&amp;nbsp;&lt;A href="https://www.cloudera.com/documentation/enterprise/5-11-x/topics/cdh_hag_hdfs_ha_cdh_components_config.html#topic_2_6_3" target="_blank"&gt;https://www.cloudera.com/documentation/enterprise/5-11-x/topics/cdh_hag_hdfs_ha_cdh_components_config.html#topic_2_6_3&lt;/A&gt; but am still seeing the same issue.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 14 Nov 2017 14:44:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/61807#M53386</guid>
      <dc:creator>desind</dc:creator>
      <dc:date>2017-11-14T14:44:20Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.10 upgrade] Wrong FS Hive tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/61977#M53387</link>
      <description>&lt;P&gt;I hit this error too, and the solution was to update the metastore database with SQL, though I use Oracle rather than MySQL.&lt;/P&gt;</description>
      <pubDate>Sun, 19 Nov 2017 04:38:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/61977#M53387</guid>
      <dc:creator>VictorMa</dc:creator>
      <dc:date>2017-11-19T04:38:41Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.10 upgrade] Wrong FS Hive tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/63886#M53388</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Unfortunately this happened again in 5.13.1 when I ran&amp;nbsp;"&lt;/SPAN&gt;&lt;SPAN&gt;Update Hive Metastore NameNodes" and it added the port twice.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jan 2018 15:48:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-10-upgrade-Wrong-FS-Hive-tables/m-p/63886#M53388</guid>
      <dc:creator>maziyar</dc:creator>
      <dc:date>2018-01-22T15:48:51Z</dc:date>
    </item>
  </channel>
</rss>

