Support Questions


I have just set up Kerberos with Hadoop and am unable to copy files from local to HDFS

Contributor

Here is the error:

2017-05-22 17:15:25,294 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: infoobjects-Latitude-3550:1025:DataXceiver error processing unknown operation src: /127.0.0.1:35436 dst: /127.0.0.1:1025
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2207)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiationCipherOptions(DataTransferSaslUtil.java:233)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:369)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:297)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:124)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:185)
        at java.lang.Thread.run(Thread.java:745)

My hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
 <name>dfs.replication</name>
 <value>1</value> <!-- default is 3; 1 for a single-node setup -->
</property>
 <property>
 <name>dfs.namenode.name.dir</name>
 <value>file:///home/priyanshu/hadoop_data/hdfs/namenode</value>
 </property>
 <property>
 <name>dfs.datanode.data.dir</name>
 <value>file:///home/priyanshu/hadoop_data/hdfs/datanode</value>
</property>
<!-- NameNode security config -->
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>zookeeper/localhost@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>zookeeper/localhost@EXAMPLE.COM</value>
</property>
<!-- Secondary NameNode config -->
<property>
<name>dfs.secondary.namenode.keytab.file</name>
<value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value>
</property>
<property>
<name>dfs.secondary.namenode.kerberos.principal</name>
<value>zookeeper/localhost@EXAMPLE.COM</value>
</property>
<!-- DataNode config -->
<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:1025</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:1027</value>
</property>
<property>
<name>dfs.data.transfer.protection</name>
<value>authentication</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>zookeeper/localhost@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value> <!-- path to the HTTP keytab -->
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>${dfs.web.authentication.kerberos.principal}</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
  <value>${dfs.web.authentication.kerberos.principal}</value>
</property>
</configuration>
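
For reference, the copy is being attempted with a command along these lines (the local file and target directory shown here are placeholders, not the exact paths):

hdfs dfs -copyFromLocal /path/to/local-file.txt /user/priyanshu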
1 ACCEPTED SOLUTION

Contributor

Resolved the issue; there was one property missing in hdfs-site.xml:

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
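
After adding the property, restart the HDFS daemons so it takes effect. A minimal sketch, assuming a plain Apache Hadoop install managed with the bundled sbin scripts and HADOOP_HOME pointing at the install directory:

# stop and start HDFS so the updated hdfs-site.xml is re-read
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh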


3 REPLIES


Hi @priyanshu hasija,

We can ignore the first two lines of the exception, as suggested in https://community.hortonworks.com/questions/35089/dataxceiver-error-processing-unknown-operation-jav....

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: infoobjects-Latitude-3550:1025:DataXceiver error processing unknown operation src: /127.0.0.1:35436 dst: /127.0.0.1:1025
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2207) at

The following exception indicates that "kinit" has not been done. Can you please do a kinit before running the hdfs copyFromLocal command (a quick check is sketched after the quoted line below)? More info: https://community.hortonworks.com/articles/4755/common-kerberos-errors-and-solutions.html

 at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake
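
A minimal sketch of that check, assuming the principal and keytab path from the posted hdfs-site.xml:

# show whether a valid Kerberos ticket is currently held
klist

# if not, obtain one using the keytab and principal from hdfs-site.xml
kinit -kt /home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab zookeeper/localhost@EXAMPLE.COM

# then retry the copy
hdfs dfs -copyFromLocal <local-file> <hdfs-target-directory>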

Master Mentor

@priyanshu hasija,

I assume you are on Linux.

Check the valid principals in the hdfs keytab:

[root@toronto ~]# klist -kt /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   1 05/08/17 23:33:51 hdfs-has@HASIJA.COM
   1 05/08/17 23:33:51 hdfs-has@HASIJA.COM
   1 05/08/17 23:33:51 hdfs-has@HASIJA.COM
   1 05/08/17 23:33:51 hdfs-has@HASIJA.COM
   1 05/08/17 23:33:51 hdfs-has@HASIJA.COM

Kinit using the hdfs keytab and principal:

[root@toronto ~]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab  hdfs-has@HASIJA.COM 

List the local file system:

[root@toronto ~]# ls 
anaconda-ks.cfg  authorized_keys  install.log  install.log.syslog 

Now copy to HDFS:

[root@toronto ~]# hdfs dfs -copyFromLocal  authorized_keys /user/admin 

Check that the copy was successful:

[root@toronto ~]# hdfs dfs -ls /user/admin
Found 1 items
-rw-r--r--   3 hdfs hdfs        405 2017-05-22 20:41 /user/admin/authorized_keys

There you go

Contributor

Resolved the issue; there was one property missing in hdfs-site.xml:

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>