Legacy applications that cannot leverage the REST interface for HDFS are often limited in how they can access data in Hadoop storage.
One way to deal with this is a mountable filesystem for Hadoop. The Linux kernel does not yet include native HDFS support, so we need to rely on "user space" capabilities, using something like FUSE (Filesystem in Userspace), to provide this functionality.
But before we begin, we need to understand some limitations of the HDFS-FUSE implementation:
Because this runs in user space (rather than kernel space), performance is not on par with what native API implementations can provide.
For large data transfers, continuous ingest, and in particular streaming operations, users should look at solutions like Apache NiFi.
And last but not least, HDFS-FUSE does not currently support "append" operations on a file.
So with those limitations understood, let's begin getting things set up on our cluster.
To begin with, we need to install the packages from the HDP repositories.
This article is going to focus on HDP 2.6.4, but the same holds true for earlier releases (HDP 2.5.0 has been tested and works similarly).
We are also going to assume that users have been added to the cluster and have access to both local directories as well as HDFS storage.
As root (or with elevated privileges), install the requisite packages.
[root@scratch ~]# yum install hadoop-hdfs-fuse
You may need to verify that "PATH" and "LD_LIBRARY_PATH" include the locations of the requisite Hadoop and Java libraries and executables.
If using the Oracle JDK & HDP 2.6.4, they might look similar to this:
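The following is a sketch only; the exact paths depend on where your JDK and HDP client libraries were installed (here we assume the Oracle JDK lives under /usr/java/default and the HDP clients under /usr/hdp/current):

[root@scratch ~]# export JAVA_HOME=/usr/java/default
[root@scratch ~]# export PATH=$JAVA_HOME/bin:/usr/hdp/current/hadoop-client/bin:$PATH
[root@scratch ~]# export LD_LIBRARY_PATH=/usr/hdp/current/hadoop-client/lib/native:$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH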
Now we need to create our mount point on the Linux filesystem that users can access.
This example uses a single active NameNode.
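A minimal sketch of creating the mount point and mounting HDFS with hadoop-fuse-dfs follows; the NameNode hostname (namenode.example.com) and port 8020 are placeholders for your environment:

[root@scratch ~]# mkdir -p /hadoop-fuse
[root@scratch ~]# hadoop-fuse-dfs dfs://namenode.example.com:8020 /hadoop-fuse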
Now we will upload a file through the Ambari Files View first to validate our access (a simple CSV file is sufficient).
Then be sure to verify that we can see the file from the command line:
[demouser@scratch ~]$ ls -l /hadoop-fuse/user/demouser
total 2
-rw-r--r-- 1 demouser hadoop 2641 Apr 3 14:45 sample_color.csv
Now we can copy a file from our local filesystem into our HDFS user directory.
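For example, assuming a local file named test.csv (the same file we reference below when creating a Hive table):

[demouser@scratch ~]$ cp test.csv /hadoop-fuse/user/demouser/
[demouser@scratch ~]$ ls -l /hadoop-fuse/user/demouser/test.csv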
And finally, we will verify the contents of the file uploaded via command line by previewing it in Ambari.
And if we want to test out creating a Hive table from our data set loaded from the Linux command line (test.csv was loaded via the command line using just the copy command):
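Here is a minimal sketch, assuming test.csv holds two comma-separated columns and has been copied into a dedicated HDFS directory such as /user/demouser/test_data (the directory, the column names, and the HiveServer2 JDBC URL are all assumptions for your environment; note that a Hive external table's LOCATION points at a directory, not a single file):

[demouser@scratch ~]$ beeline -u jdbc:hive2://localhost:10000 -n demouser -e "
CREATE EXTERNAL TABLE test_colors (id INT, color STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/demouser/test_data';"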
And now we can inspect (sample) the data in our table.
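Continuing with the same assumed table name and connection details:

[demouser@scratch ~]$ beeline -u jdbc:hive2://localhost:10000 -n demouser -e "SELECT * FROM test_colors LIMIT 10;"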
Once you are satisfied with the location and permissions, you can have this mount at boot, or run it as part of a secondary startup script (e.g. rc.local, if it is enabled on CentOS/RHEL 7+) to mount on reboot. But it is best to wait until the NameNode is up and running before you proceed with this automation.
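One common approach is an /etc/fstab entry for hadoop-fuse-dfs; the hostname and port below are placeholders for your environment, and keep in mind the NameNode-availability caveat above if the mount is attempted early in the boot sequence:

hadoop-fuse-dfs#dfs://namenode.example.com:8020 /hadoop-fuse fuse allow_other,usetrash,rw 2 0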