Community Articles

Reading and writing files to a MapR cluster (version 6) is simple using the standard PutFile and GetFile processors over the MapR NFS gateway.

If you've searched high and low on how to do this, you've likely found articles and GitHub projects specifying steps. I've tried these steps without success; what's out there is either too complicated or too outdated to get NiFi reading from and writing to MapR. You don't need to re-compile the HDFS processors with the MapR dependencies - just follow the steps below:

1) Install the MapR client on each NiFi node

#Install syslinux (needed for the rpm install)
sudo yum install syslinux
#Download the RPM for your OS from http://package.mapr.com/releases/v6.0.0/redhat/
sudo rpm -Uvh mapr-client-6.0.0.20171109191718.GA-1.x86_64.rpm
#Configure the MapR client to connect to the CLDB
/opt/mapr/server/configure.sh -c -N ryancicak.com -C cicakmapr0.field.hortonworks.com:7222 -genkeys -secure
#Once the same users/groups exist on your OS as on MapR, you can run maprlogin password to obtain a ticket
#Prove that you can access the MapR FS
hadoop fs -ls /
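A note on the users/groups comment above: MapR maps access by numeric UID/GID, so the IDs on each NiFi node must match the same user's IDs on the cluster. A quick sketch for checking the local side (the maprlogin commands are shown as comments since they need the client installed and the cluster reachable):

```shell
# Print the numeric IDs MapR will see for the current user; they must
# match this user's UID/GID on the cluster nodes.
uid=$(id -u)
gid=$(id -g)
echo "uid=$uid gid=$gid"

# With matching IDs in place, obtain a MapR ticket (prompts for password):
#   maprlogin password -cluster ryancicak.com
# ...and confirm the ticket was issued:
#   maprlogin print
```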

2) Mount the MapR FS on each NiFi node

sudo mount -o hard,nolock cicakmapr0.field.hortonworks.com:/mapr /mapr

*This will allow you to access the MapR FS at the mount point /mapr/yourclustername.com/location
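To make the mount survive reboots, the same options can go in /etc/fstab - a sketch, assuming the host and mount point from step 2:

```
# /etc/fstab entry (assumed host and mount point from the mount command above)
cicakmapr0.field.hortonworks.com:/mapr  /mapr  nfs  hard,nolock  0  0
```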

3) Use the PutFile and GetFile processors, referencing the /mapr directory on your NiFi nodes
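On the NiFi side nothing MapR-specific is required - the processors just see a local path. A hypothetical PutFile configuration (the directory name is an assumption; any path under the mount works):

```
PutFile
  Directory: /mapr/ryancicak.com/user/nifi/landing    # any path under the NFS mount
  Conflict Resolution Strategy: replace
```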

*Following steps 1-3 allows you to quickly read from and write to MapR using NiFi.
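Before wiring up the processors, it's worth proving the round-trip from the shell on a NiFi node - a minimal sketch (the default below falls back to a local scratch directory; point MAPR_TEST_DIR at a real path on the mount, e.g. a directory under /mapr/yourclustername.com, to exercise the NFS gateway):

```shell
# Stand-in directory; set MAPR_TEST_DIR to a path on the NFS mount to test it
DIR="${MAPR_TEST_DIR:-$(mktemp -d)}"

# Write a file the way PutFile would...
echo "nifi nfs round-trip" > "$DIR/nfs_test.txt"

# ...and read it back the way GetFile would
cat "$DIR/nfs_test.txt"
```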
