Archives of Support Questions (Read Only)

This board is archived and read-only for historical reference. Information and links may no longer be available or relevant. To ask a new question, please post a new topic on the appropriate active board.

Multiple NFS Gateways for HDFS


Is it possible to have multiple NFS Gateways on different nodes on a single cluster?

1 ACCEPTED SOLUTION

Master Mentor
@bsaini

Yes. I just added two NFS Gateways to the same cluster using Ambari.

(Screenshots, no longer available: screen-shot-2016-02-11-at-80137-pm.png, screen-shot-2016-02-11-at-80246-pm.png)
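Since the screenshots are no longer available: outside of Ambari, enabling an NFS Gateway on a host roughly comes down to granting the gateway's proxy user impersonation rights in core-site.xml. This is a sketch based on the standard HDFS NFS Gateway documentation; `nfsserver` is the conventional proxy user name and the `*` wildcards are permissive placeholders you would normally restrict.

```xml
<!-- core-site.xml on the NameNode and on each gateway host.
     "nfsserver" is the user the gateway daemon runs as (an assumption;
     substitute your actual service user). -->
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>*</value> <!-- groups whose members the gateway may impersonate -->
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>*</value> <!-- hosts the gateway is allowed to connect from -->
</property>
```

After deploying the configuration, the portmap and nfs3 services are started on each gateway host (command names and privileges vary by Hadoop version; consult the NFS Gateway guide for your release). Repeating this on several hosts is what gives you multiple gateways in one cluster.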


3 Replies


Master Mentor
@bsaini In the first phase, we have enabled NFSv3 access to HDFS. This is done using the NFS Gateway, a stateless daemon that translates the NFS protocol into HDFS access protocols, as illustrated in the diagram in the blog post linked below. Many instances of this daemon can be run to provide high-throughput read/write access to HDFS from multiple clients. As part of this work, HDFS gained support for inode IDs (file handles), implemented in Apache JIRA HDFS-4489.

Source: http://hortonworks.com/blog/simplifying-data-management-nfs-access-to-hdfs/
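To make the "multiple clients" point concrete, a client mounts any one of the gateways as a plain NFSv3 export. The mount options below are the ones recommended in the standard HDFS NFS Gateway documentation; `servernfs01` and `/hdfs` are placeholder names.

```shell
# On a client host (as root): mount HDFS through one of the gateways.
# vers=3      - the gateway only speaks NFSv3
# proto=tcp   - only TCP transport is supported
# nolock      - the gateway does not support the NLM lock protocol
mkdir -p /hdfs
mount -t nfs -o vers=3,proto=tcp,nolock servernfs01:/ /hdfs

# HDFS now appears as an ordinary filesystem tree:
ls /hdfs
```

Each client can point at a different gateway, which is how multiple gateways spread read/write load.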

Rising Star

Hi,

If I understand correctly, we can start multiple NFS Gateway servers on multiple nodes (DataNode, NameNode, or an HDFS client host).

Say we have gateways (servernfs01, servernfs02, servernfs03) and clients (client01, client02):

client01# : mount -t nfs servernfs01:/ /test01
client02# : mount -t nfs servernfs02:/ /test02

My question is: how do we avoid a service interruption? What happens if servernfs01 fails?

How does client01 keep access to HDFS in that case?
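The thread does not answer this, but because the gateway is a stateless daemon (as noted in the quoted blog post), a client can simply remount through any surviving gateway. A minimal manual failover sketch, assuming the hostnames from the question above:

```shell
# On client01, if servernfs01 has failed (run as root):
# force-unmount the dead gateway, then remount via another one.
umount -f /test01
mount -t nfs -o vers=3,proto=tcp,nolock servernfs02:/ /test01
```

For transparent failover you would typically front the gateways with a floating IP (e.g. managed by keepalived) or round-robin DNS, so clients keep a single mount target; that setup is outside what this thread covers.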