Namenode throughput in HDFS federation

New Contributor

Hi,

I want to distribute the NameNode load so that I can achieve higher throughput for a large volume of incoming client requests for HDFS read and write operations.

Can I point a single Java application at two different NameNodes in a federated cluster, so that all the active NameNodes share the load of incoming read and write requests, i.e. all of them participate in serving requests from that one application?


Re: Namenode throughput in HDFS federation

@Viraj Vekaria

NameNode federation is the assignment of directories to a NameNode (or an HA pair of NameNodes), much like mount points on Linux. Directing clients to the right NameNode is the responsibility of the HDFS client, and if you are using the standard HDFS libraries, having a properly configured hdfs-site.xml on the classpath of the Java client handles that transparently. (You do not need to alter client code.)

http://hortonworks.com/blog/an-introduction-to-hdfs-federation/
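To illustrate the "transparent" routing mentioned above: with federation, the client-side mount table (ViewFS) maps directory prefixes to namespaces, so one application path space spans several NameNodes. A minimal sketch of a client-side core-site.xml, where the cluster name "clusterX" and the hosts nn1/nn2 are hypothetical:

```xml
<!-- core-site.xml on the client: minimal ViewFS mount-table sketch.
     "clusterX", "nn1", and "nn2" are placeholder names. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://clusterX</value>
  </property>
  <!-- Paths under /user are served by the first NameNode -->
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./user</name>
    <value>hdfs://nn1:8020/user</value>
  </property>
  <!-- Paths under /data are served by the second NameNode -->
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./data</name>
    <value>hdfs://nn2:8020/data</value>
  </property>
</configuration>
```

With this in place, application code just opens viewfs paths such as /user/... or /data/..., and the HDFS client sends each request to whichever NameNode owns that mount point.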

Re: Namenode throughput in HDFS federation

New Contributor

@jss: I haven't configured HA right now; it's just NameNode federation, and I am testing this scenario. For HA I know the client will fail over automatically if the active NameNode goes down, but here all the NameNodes are active and I want to point all of them at the same application. Is it possible to spread the requests across all of these active NameNodes?
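For the non-HA federation case being asked about here, each NameNode is simply its own nameservice, and one application can address all of them. A minimal hdfs-site.xml sketch, where the nameservice IDs and hostnames are hypothetical:

```xml
<!-- hdfs-site.xml: non-HA federation sketch with two independent
     active NameNodes; "ns1"/"ns2" and the hostnames are placeholders. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>nn1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>nn2.example.com:8020</value>
  </property>
</configuration>
```

The same application can then read and write through both NameNodes (e.g. paths under hdfs://ns1/ and hdfs://ns2/, or via a ViewFS mount table over both), so both stay active and serve requests; note the load splits along the directory layout rather than being balanced automatically per request.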
