
HDFS Fuse connector dies

New Contributor

Hi everybody.

 

Until now, we have been using the HDFS fuse connector for intensive reads from several web servers through a fuse mount point, without problems.

However, we recently expanded our functionality by allowing users to upload big files to HDFS via WebDAV from these web servers. It works without problems for an hour or so, but then the fuse connection suddenly raises an exception and dies. The only way to recover is to unmount and mount again.

 

The problem is clearly on the fuse side. We have also noticed that the memory consumption of the fuse process keeps increasing until it uses all the memory available on the machine (a couple of gigabytes), even though /etc/default/hadoop-fuse limits the heap to 512MB (export LIBHDFS_OPTS="-Xms128m -Xmx512m").
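For what it's worth, this is roughly how we watch the process memory grow; the process name "fuse_dfs" is just what shows up on our machines, so adjust it to your setup:

# Print resident and virtual memory of the fuse process every 60 seconds.
# "fuse_dfs" is assumed to be the process name; adjust if yours differs.
while true; do
  ps -o rss=,vsz=,cmd= -C fuse_dfs
  sleep 60
done

The resident size climbs well past the 512MB heap cap, which suggests the growth is happening outside the Java heap.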

Launching the mount by hand via "fuse_dfs" instead of "hadoop-hdfs-fuse" gives the same problem.
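For reference, this is roughly how we mount it; the namenode host, port and mount point below are placeholders, and -d is just the standard fuse debug/foreground flag that produces the trace further down (the wrapper script only sets up the environment and execs fuse_dfs, as the trace shows):

# Mount HDFS by hand in debug mode (host, port and mount point are examples).
hadoop-fuse-dfs dfs://namenode.example.com:8020 /mnt/hdfs -d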

 

The fuse version is hadoop-hdfs-fuse 2.6.0+cdh5.4.5+626-1.cdh5.4.5.p0.8~trusty-cdh5.4.5 on Ubuntu 14.04.

 

This is a trace of the fuse connector right before it dies:

 

LOOKUP /user/playgiga/ProfilesQA/00000000-0000-0000-0000-000000000320
getattr /user/playgiga/ProfilesQA/00000000-0000-0000-0000-000000000320
unique: 190741, error: -2 (No such file or directory), outsize: 16
unique: 190742, opcode: LOOKUP (1), nodeid: 13, insize: 77, pid: 22655
LOOKUP /user/playgiga/ProfilesQA/00000000-0000-0000-0000-000000000320
getattr /user/playgiga/ProfilesQA/00000000-0000-0000-0000-000000000320
unique: 190742, error: -2 (No such file or directory), outsize: 16
unique: 190732, success, outsize: 16
DEQUEUE PATH1 4543 (w)
PATH2 2465
rename /user/playgiga/ProfilesQA/319/192.vhd.0000000373 /user/playgiga/ProfilesQA/319/192.vhd
fuse_dfs: fuse.c:926: unlock_path: Assertion `node->treelock != 0' failed.
/usr/bin/hadoop-fuse-dfs: line 29: 4746 Aborted (core dumped) env CLASSPATH="${CLASSPATH}" ${HADOOP_HOME}/bin/fuse_dfs $@

 

 

Any hint?

At the moment we are using NFS instead of fuse, but performance is far worse since all requests are routed through an NFS gateway instead of going directly to the datanodes.

 

Thanks in advance.

2 REPLIES

Re: HDFS Fuse connector dies

Master Collaborator
Did you find a solution for this?
Thanks,
Tomas

Re: HDFS Fuse connector dies

New Contributor

On the fuse side, we could not fix it. I downloaded and compiled the latest HDFS version from Apache with the native libs, but the problem persists. I think it is related to the high-concurrency tasks we launch against the fuse mount point. With lightweight tasks it runs for days or weeks without problems; in the long run, however, it fails, probably due to some memory leak.
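In case it helps anyone, the build was roughly this; the command is the one documented in Hadoop's BUILDING.txt, so take it as a sketch rather than our literal history:

# Build the Hadoop distribution from source with the native libraries.
# Needs a JDK, Maven, protobuf, cmake and a C toolchain; fuse_dfs should be
# built as well when the libfuse development headers are installed.
mvn package -Pdist,native -DskipTests -Dtar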

 

Workaround: install a local NFS gateway and mount localhost via NFS. This way everything runs as expected.
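For anyone wanting to try the same, the mount on each web server looks roughly like this; the options are the ones recommended in the HDFS NFS Gateway documentation, and the mount point is just an example:

# Mount the local HDFS NFS gateway; /mnt/hdfs is an example mount point.
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync localhost:/ /mnt/hdfs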