I want to share the NameNode load so that I can achieve higher throughput for a large volume of incoming client requests for HDFS read and write operations.
Can I point a single Java application to two different NameNodes in a federated cluster, so that all the active NameNodes share the load for incoming read and write requests, i.e. all of them participate in serving requests from that one application?
NameNode federation is the assignment of directories to a NameNode (or an HA pair of NameNodes), much like mount points on Linux. Directing clients to the right NameNode is the responsibility of the HDFS client, and if you're using the standard HDFS libraries, then having a properly configured hdfs-site.xml on the classpath of the Java client will handle that transparently. (You do not need to alter client code.)
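As a rough sketch of what that client-side configuration looks like: you declare the federated nameservices in hdfs-site.xml and then map directories to them with a ViewFs mount table (the mount-table links usually live in core-site.xml). The nameservice names (`ns1`, `ns2`), hostnames, and paths below are placeholders for illustration, not values from this thread:

```xml
<!-- hdfs-site.xml: declare the two federated nameservices (example names/hosts) -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1,ns2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>nn2.example.com:8020</value>
</property>

<!-- core-site.xml: ViewFs mount table routing each directory to a NameNode -->
<property>
  <name>fs.defaultFS</name>
  <value>viewfs://clusterX</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./user</name>
  <value>hdfs://nn1.example.com:8020/user</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./data</name>
  <value>hdfs://nn2.example.com:8020/data</value>
</property>
```

With this in place, the one application simply opens paths like `/user/...` or `/data/...` through the normal `FileSystem` API, and the client routes each request to the NameNode that owns that mount point, so both active NameNodes serve traffic from the same application.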
@jss: I haven't configured HA right now; it's just NameNode federation, and I am testing the scenario. For HA I know the client will fail over automatically if one of the active NameNodes goes down, but what I am asking here is different: all NameNodes are active and I want to point all of them at the same application. Is it possible to spread the requests across all these active NameNodes?