
FUSE, keytabs, and fstab..

New Contributor

I'm trying to get FUSE set up to auto-mount HDFS from fstab. This is documented (https://www.cloudera.com/documentation/enterprise/5-13-x/topics/cdh_ig_hdfs_mountable.html), but I'm having trouble getting it to work.

 

The general requirements are in place -- Ubuntu 16.04 with the Cloudera repository added, and hadoop-hdfs-fuse (and its dependencies) installed. Our cluster is set up with HA, and the configuration lets me mount the HA nameservice through the FUSE client. As a user, I can mount HDFS -- `sudo hadoop-fuse-dfs -d hdfs://hdsdata /mnt/hdfs/` -- and this works.

 

What I can't achieve is having HDFS mounted at system startup. If I try `ls /mnt/hdfs`, I'm greeted with "ls: cannot access '/mnt/hdfs': Transport endpoint is not connected." This seems to be related to Kerberos principals -- when mounting as a user (even with sudo), fuse-dfs uses my user's Kerberos ticket for authentication. So, I've created a keytab for the host:

 

Keytab name: FILE:/etc/krb5.keytab
   2 01/15/2019 13:41:34 hdfs/fs-03.internal.mydomain.com@INTERNAL.MYDOMAIN.COM (aes256-cts-hmac-sha1-96)

 

and this keytab is duplicated to:

Keytab name: FILE:/etc/security/keytabs/dn.service.keytab
   2 01/15/2019 13:41:34 hdfs/fs-03.internal....

 

-- note that there is only one line/principal in the keytab.
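
For reference, the listings above are keytab-mode klist output, which can be rerun to verify the entries:

# Show KVNO, timestamp, and encryption type for each keytab entry
sudo klist -kte /etc/krb5.keytab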

 

With this, I'm trying to give the _host_ permission to mount the Hadoop filesystem. However, if I remove my ticket cache (rm /tmp/krb5cc_1000) and try `sudo hadoop-fuse-dfs -d hdfs://hdsdata /mnt/hdfs/`, I'm greeted by I/O errors. The same happens if I reboot and let the system mount from fstab.
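
For completeness, the fstab entry follows the format from the Cloudera doc linked above, with our HA nameservice ID in place of a single host:port (the exact line here is my reconstruction, not copied from the doc):

# /etc/fstab -- hadoop-fuse-dfs mount of the HA nameservice 'hdsdata'
hadoop-fuse-dfs#dfs://hdsdata /mnt/hdfs fuse allow_other,usetrash,rw 2 0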

 

The keytab is referenced in hdfs-site.xml,

/etc/hadoop/conf/hdfs-site.xml:

<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/security/keytabs/dn.service.keytab</value>
</property>

<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hdfs/_HOST@INTERNAL.MYDOMAIN.COM</value>
</property>

 

 

Does anyone have input on how to mount HDFS via fstab and have everything accessible to the users of the system this way?

 

=======

/var/log/syslog

Jan 16 12:15:46 fs-03 mount[1352]: unique: 2, opcode: STATFS (17), nodeid: 1, insize: 40, pid: 2577
Jan 16 12:15:46 fs-03 mount[1352]: statfs /
Jan 16 12:15:46 fs-03 mount[1352]: fuseNewConnect: failed to find Kerberos ticket cache file '/tmp/krb5cc_0'.  Did you remember to kinit for UID 0?
Jan 16 12:15:46 fs-03 mount[1352]: fuseConnect(usrname=root): fuseNewConnect failed with error code -13
Jan 16 12:15:46 fs-03 mount[1352]: fuseConnectAsThreadUid: failed to open a libhdfs connection!  error -13.
Jan 16 12:15:46 fs-03 mount[1352]:    unique: 2, error: -5 (Input/output error), outsize: 16
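
Reading that log, fuse-dfs wants a ticket cache for UID 0 (/tmp/krb5cc_0). Presumably that cache could be primed from the host keytab before mounting -- something like the following, using the principal from the keytab shown earlier:

# Obtain a ticket for the host's service principal into root's default
# ticket cache (/tmp/krb5cc_0), where fuse-dfs looks for UID 0
sudo kinit -kt /etc/krb5.keytab hdfs/fs-03.internal.mydomain.com@INTERNAL.MYDOMAIN.COM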

1 REPLY

New Contributor

It turns out I had some misconceptions, but things still aren't working as I'd like.

 

First, you can FUSE-mount the mountpoint and let it sit there. Any user who tries to access it (root, demouser, demoadmin, ..) will try to use their own Kerberos ticket to access it:

unique: 6, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 18419
getattr /
fuseNewConnect: failed to find Kerberos ticket cache file '/tmp/krb5cc_0'.  Did you remember to kinit for UID 0?
fuseConnect(usrname=root): fuseNewConnect failed with error code -13
fuseConnectAsThreadUid: failed to open a libhdfs connection!  error -13.
   unique: 6, error: -5 (Input/output error), outsize: 16
unique: 7, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 18643
getattr /
hdfsBuilderConnect(forceNewInstance=1, nn=hdfs://optimusdata, port=0, kerbTicketCachePath=/tmp/krb5cc_1000, userName=demouser) error:
LoginException: Unable to obtain password from user
org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: demouser using ticket cache file: /tmp/krb5cc_1000 javax.security.auth.login.LoginException: Unable to obtain password from user

        at org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1846)
        ...
fuseNewConnect(usrname=demouser): Unable to create fs: error code 255
fuseConnect(usrname=demouser): fuseNewConnect failed with error code 255
fuseConnectAsThreadUid: failed to open a libhdfs connection!  error 255.
   unique: 15, error: -5 (Input/output error), outsize: 16

With the debug mount option, it shows that the Kerberos ticket cache being tried changes as each user runs `ls /mnt/hdfs`.
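
In other words, every local user seems to need their own valid cache before touching the mount; for example (assuming demouser has UID 1000 and a principal in our realm):

# Get a ticket into /tmp/krb5cc_1000, the cache fuse-dfs looks up for UID 1000
kinit demouser@INTERNAL.MYDOMAIN.COM
ls /mnt/hdfs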

 

Still, I want to use a service principal for Hadoop/Kerberos access, and let the local system handle user-based authentication (or even just expose the whole mount to any logged-in user, or use uid/gid mount options).
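
One approach I'm considering (untested) is keeping a ticket for the service principal alive in root's default cache with k5start from the kstart package:

# Re-obtain a ticket from the host keytab every 60 minutes, running in the
# background; -U takes the client principal from the keytab itself
sudo k5start -b -f /etc/krb5.keytab -U -K 60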

 

It feels like there is an error somewhere.

$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: hdfs/fs-03.internal.mydomain.com@INTERNAL.MYDOMAIN.COM

Valid starting       Expires              Service principal
01/17/2019 14:36:23                       krbtgt/INTERNAL.MYDOMAIN.COM@INTERNAL.MYDOMAIN.COM

 

klist shows that the default principal for my user is the HDFS service principal, and that the referenced ticket cache file exists and is being picked up. Nevertheless, the FUSE client is trying to authenticate as the username 'demouser'. Does anyone know a way of changing this?