<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Create a Hive table with HDFS RBF location in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Create-a-Hive-table-with-HDFS-RBF-location/m-p/408450#M252710</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I'm trying to insert data into a Hive table whose location uses HDFS Router-Based Federation (RBF).&lt;/P&gt;&lt;P&gt;Location: hdfs://router_host:8888/router/router.db/router_test_table&lt;/P&gt;&lt;P&gt;The cluster is kerberized, and all components, including Hive and RBF, work as expected except in this specific use case.&lt;/P&gt;&lt;P&gt;The Hive insert job fails with a Kerberos error when the RBF router address is used as the table location:&lt;BR /&gt;&lt;BR /&gt;Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: DestHost:destPort router_host:8888 , LocalHost:localPort datanode_host. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:639)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:563)&lt;BR /&gt;... 17 more&lt;BR /&gt;Caused by: java.io.IOException: DestHost:destPort router_host:8888 , LocalHost:localPort datanode_host. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]&lt;BR /&gt;at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)&lt;BR /&gt;at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)&lt;BR /&gt;at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)&lt;BR /&gt;at java.lang.reflect.Constructor.newInstance(Constructor.java:423)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:913)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:888)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1616)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1558)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1455)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)&lt;/P&gt;</description>
    <pubDate>Fri, 16 May 2025 16:08:45 GMT</pubDate>
    <dc:creator>Hadoop16</dc:creator>
    <dc:date>2025-05-16T16:08:45Z</dc:date>
    <item>
      <title>Create a Hive table with HDFS RBF location</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Create-a-Hive-table-with-HDFS-RBF-location/m-p/408450#M252710</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I'm trying to insert data into a Hive table whose location uses HDFS Router-Based Federation (RBF).&lt;/P&gt;&lt;P&gt;Location: hdfs://router_host:8888/router/router.db/router_test_table&lt;/P&gt;&lt;P&gt;The cluster is kerberized, and all components, including Hive and RBF, work as expected except in this specific use case.&lt;/P&gt;&lt;P&gt;The Hive insert job fails with a Kerberos error when the RBF router address is used as the table location:&lt;BR /&gt;&lt;BR /&gt;Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: DestHost:destPort router_host:8888 , LocalHost:localPort datanode_host. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:639)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:563)&lt;BR /&gt;... 17 more&lt;BR /&gt;Caused by: java.io.IOException: DestHost:destPort router_host:8888 , LocalHost:localPort datanode_host. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]&lt;BR /&gt;at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)&lt;BR /&gt;at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)&lt;BR /&gt;at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)&lt;BR /&gt;at java.lang.reflect.Constructor.newInstance(Constructor.java:423)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:913)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:888)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1616)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1558)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1455)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)&lt;/P&gt;</description>
      <pubDate>Fri, 16 May 2025 16:08:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Create-a-Hive-table-with-HDFS-RBF-location/m-p/408450#M252710</guid>
      <dc:creator>Hadoop16</dc:creator>
      <dc:date>2025-05-16T16:08:45Z</dc:date>
    </item>
    <item>
      <title>Re: Create a Hive table with HDFS RBF location</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Create-a-Hive-table-with-HDFS-RBF-location/m-p/411052#M252979</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/109626"&gt;@Hadoop16&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;This stack trace usually indicates an inconsistency between JDK versions.&lt;/P&gt;&lt;P&gt;Check whether HDFS and Hive are running different JDK versions.&lt;/P&gt;&lt;P&gt;You can also try exporting JAVA_HOME explicitly.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Reference:&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/t5/Internal/ERROR-quot-Failed-on-local-exception-java-io-IOException-org/ta-p/332526" target="_blank" rel="noopener"&gt;https://community.cloudera.com/t5/Internal/ERROR-quot-Failed-on-local-exception-java-io-IOException-org/ta-p/332526&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 02 Jul 2025 00:55:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Create-a-Hive-table-with-HDFS-RBF-location/m-p/411052#M252979</guid>
      <dc:creator>Shmoo</dc:creator>
      <dc:date>2025-07-02T00:55:13Z</dc:date>
    </item>
    <item>
      <title>Re: Create a Hive table with HDFS RBF location</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Create-a-Hive-table-with-HDFS-RBF-location/m-p/411095#M252985</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/109626"&gt;@Hadoop16&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;This looks like a Java issue.&lt;/P&gt;&lt;P&gt;Please check the following article:&lt;BR /&gt;&lt;A href="https://community.cloudera.com/t5/Support-Questions/AccessControlException-Client-cannot-authenticate-via-TOKEN/td-p/347406" target="_blank"&gt;https://community.cloudera.com/t5/Support-Questions/AccessControlException-Client-cannot-authenticate-via-TOKEN/td-p/347406&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 02 Jul 2025 15:20:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Create-a-Hive-table-with-HDFS-RBF-location/m-p/411095#M252985</guid>
      <dc:creator>JoseManuel</dc:creator>
      <dc:date>2025-07-02T15:20:05Z</dc:date>
    </item>
    <item>
      <title>Re: Create a Hive table with HDFS RBF location</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Create-a-Hive-table-with-HDFS-RBF-location/m-p/413313#M253998</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/109626"&gt;@Hadoop16&lt;/a&gt;&amp;nbsp;FYI&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;DIV&gt;&lt;SPAN&gt;➤&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;This error occurs because of a token delegation gap between Hive and the HDFS Router.&lt;/DIV&gt;&lt;DIV&gt;In a Kerberized cluster, when Hive (running on a DataNode/Compute node via Tez or MapReduce) attempts to write to HDFS, it needs a Delegation Token. When you use an HDFS Router address, Hive must be explicitly told to obtain a token specifically for the Router's service principal, which may be different from the backend NameNodes.&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;➤&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;The Root Cause&lt;/DIV&gt;&lt;DIV&gt;The error Client cannot authenticate via:[TOKEN, KERBEROS] at the FileSinkOperator stage indicates that the tasks running on your worker nodes do not have a valid token to "speak" to the Router at router_host:8888.&lt;/DIV&gt;&lt;DIV&gt;When Hive plans the job, it usually fetches tokens for the default filesystem. If your fs.defaultFS is set to a regular NameNode but your table location is an RBF address, Hive might not be fetching the secondary token required for the Router.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;➤&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&amp;nbsp;The Fix: Configure Token Requirements&lt;/DIV&gt;&lt;DIV&gt;You need to ensure Hive and the underlying MapReduce/Tez framework know to fetch tokens for the Router's URI.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;1. 
Add the Router URI to Hive's Token List&lt;/DIV&gt;&lt;DIV&gt;In your Hive session (or globally in hive-site.xml), you must define the Router as a "known" filesystem that requires tokens.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;SET hive.metastore.token.signature=hdfs://router_host:8888;&lt;/DIV&gt;&lt;DIV&gt;SET mapreduce.job.hdfs-servers=hdfs://router_host:8888,hdfs://nameservice-backend;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;2. Configure HDFS Client to "Trust" the Router for Tokens&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;In core-site.xml or hdfs-site.xml, you need to enable the Router to act as a proxy for the backend NameNodes so it can pass the tokens correctly.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&amp;lt;property&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; &amp;lt;name&amp;gt;dfs.federation.router.delegation.token.enable&amp;lt;/name&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;lt;/property&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;➤&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&amp;nbsp;Critical Kerberos Configuration&lt;/DIV&gt;&lt;DIV&gt;Because the Router is an intermediary, it must be allowed to impersonate the user (Hive) when talking to the backend. 
Ensure your ProxyUser settings in core-site.xml include the Router's service principal.&lt;/DIV&gt;&lt;DIV&gt;Assuming your Router runs as the hdfs or router user:&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&amp;lt;property&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; &amp;lt;name&amp;gt;hadoop.proxyuser.router.groups&amp;lt;/name&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; &amp;lt;value&amp;gt;*&amp;lt;/value&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;lt;/property&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;lt;property&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; &amp;lt;name&amp;gt;hadoop.proxyuser.router.hosts&amp;lt;/name&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; &amp;lt;value&amp;gt;*&amp;lt;/value&amp;gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;lt;/property&amp;gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;➤&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Diagnostic Verification&lt;/DIV&gt;&lt;DIV&gt;To verify whether the token is missing, run these commands from the datanode_host mentioned in your error logs, as the same user that runs the Hive job:&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;# Check whether you can manually get a token for the Router&lt;/DIV&gt;&lt;DIV&gt;hdfs fetchdt --renewer hdfs hdfs://router_host:8888 router.token&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;# Check the contents of your current credentials cache&lt;/DIV&gt;&lt;DIV&gt;klist -f&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;If fetchdt fails, the issue is with the Router's ability to issue tokens. If it succeeds but Hive still fails, the issue is that Hive's job submission is not including the Router URI in the mapreduce.job.hdfs-servers list.&lt;/DIV&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 10 Jan 2026 06:56:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Create-a-Hive-table-with-HDFS-RBF-location/m-p/413313#M253998</guid>
      <dc:creator>9een</dc:creator>
      <dc:date>2026-01-10T06:56:47Z</dc:date>
    </item>
  </channel>
</rss>

