In production setups, are files loaded into HDFS from a particular machine?
If so, and that machine were also a data node, wouldn't it be identified as a co-located client, thus preventing data distribution across the cluster?
Or is the standard practice to load the files from the name node host?
Or what other practice is commonly used for loading files into HDFS?
Appreciate the insights.
I think there is some confusion about how we are defining co-located clients, which comes from the way I first understood and answered your question. Your edge nodes are different from what Hadoop calls a co-located client, which is probably the term you have read somewhere.
When a MapReduce job runs, it spawns a number of mappers. Each mapper usually reads data on the node it is running on (the local data), and each such mapper is a client, or co-located client, assuming it is reading local data. Originally, however, these mappers did not take advantage of the fact that the data they were reading was local to where they were running (a mapper might in some cases read data from a remote machine, which is inefficient when it happens). The mappers were using the same TCP protocol to read local data as they would use to read remote data. It was determined that performance could be improved by about 20-30% simply by making these mappers, once aware that the data is local, read the data blocks directly off the disk. Hence this change was made to improve that performance. If you would like more details, please see the following JIRA.
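For reference, this local-read optimization (known as short-circuit local reads) is enabled through `hdfs-site.xml`. A minimal sketch; the domain-socket path shown is an assumption, and your distribution may preconfigure a different one:

```xml
<!-- hdfs-site.xml: enable short-circuit local reads -->
<configuration>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <!-- UNIX domain socket the DataNode and client use to exchange
         file descriptors; this path is an assumption -->
    <name>dfs.domain.socket.path</name>
    <value>/var/lib/hadoop-hdfs/dn_socket</value>
  </property>
</configuration>
```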
If you scroll down in that JIRA, you will see a design document that should clear up any confusion you may have.
In production, you would have "edge nodes" where client programs are installed and talk to the cluster. But even if you put data on the local file system of a data node and then copy it into HDFS, that will not prevent data distribution. The client file lives in the local file system (XFS, ext4), which is unrelated to HDFS (well, not exactly, but it is as far as your question is concerned).
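The reason distribution still happens can be seen from HDFS's default replica placement: when the writer is itself a data node, only the first replica is pinned locally; the remaining replicas go to other nodes. A toy sketch of that behavior (not Hadoop's actual BlockPlacementPolicy code, and rack awareness is omitted; node names are made up):

```python
import random

def place_replicas(writer, datanodes, replication=3):
    """Toy model of HDFS default block placement.

    If the writer is a data node, the first replica is stored locally;
    the remaining replicas are spread over other nodes, so the block is
    still distributed across the cluster.
    """
    targets = []
    if writer in datanodes:
        targets.append(writer)  # first replica on the co-located client
    others = [n for n in datanodes if n not in targets]
    random.shuffle(others)
    while len(targets) < replication and others:
        targets.append(others.pop())
    return targets

nodes = ["dn1", "dn2", "dn3", "dn4"]
placement = place_replicas("dn1", nodes)
print(placement[0])    # dn1: the local replica
print(len(placement))  # 3: the other two replicas land on other nodes
```

So writing from a data node only biases where the *first* replica lands; the cluster still receives the other copies.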
Standard practice is to use an edge node, not the name node.
If moving files into HDFS from a data node will not prevent distribution, then when does the co-located client dynamic come into play?
Also, is the edge node that you mention a data node? If not, is it simply a machine with Hadoop software installed to facilitate interaction with HDFS?
Appreciate the feedback.
Is the 'edge node' a data node?
No. You can, if you want, put edge processes such as client configs and client programs on the same node as a data node, but that does not make the data node an edge node. Ideally this is not recommended, but if you have a very small cluster, then sure, there is no problem with that.
So what is required for the edge node to connect to the cluster: Hadoop software, core-site.xml, hdfs-site.xml, ... and what else?
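At a minimum, the client needs the Hadoop client binaries plus the cluster's configs so it knows where the NameNode is. A minimal `core-site.xml` sketch; the hostname and port below are assumptions (8020 is a common NameNode RPC port, but check your cluster's actual config):

```xml
<!-- core-site.xml on the edge node: point clients at the cluster -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- hypothetical NameNode host; copy the real value from the cluster -->
    <value>hdfs://nn1.example.com:8020</value>
  </property>
</configuration>
```

In practice the simplest approach is to copy the working `core-site.xml` and `hdfs-site.xml` from a cluster node rather than writing them by hand.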
Appreciate the clarification.
You can also securely do this via a REST API over HTTP from any node:
1. WebHDFS - the REST API exposed directly by the NameNode and DataNodes
2. HttpFS - a separate gateway service; use this if you plan on using WebHDFS in a High Availability cluster (Active and Passive NameNodes)
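WebHDFS operations are plain HTTP requests of the form `http://<host>:<port>/webhdfs/v1/<path>?op=...`. A small sketch that just builds such URLs; the hostname is made up, and 50070 is the Hadoop 2.x default NameNode HTTP port (yours may differ):

```python
from urllib.parse import urlencode

def webhdfs_url(host, path, op, port=50070, **params):
    """Build a WebHDFS REST URL for the given HDFS path and operation."""
    query = urlencode({"op": op, **params})
    return f"http://{host}:{port}/webhdfs/v1{path}?{query}"

# e.g. read a file over REST (hypothetical host and path):
url = webhdfs_url("namenode.example.com", "/user/alice/data.csv", "OPEN")
print(url)
# → http://namenode.example.com:50070/webhdfs/v1/user/alice/data.csv?op=OPEN
```

The same URL shape works for other operations such as `op=CREATE`, `op=LISTSTATUS`, or `op=GETFILESTATUS`; with HttpFS you would point the host and port at the HttpFS gateway instead of the NameNode.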
You can also implement Knox for a single and secure REST access point (with different port numbers) for:
- Ambari
- WebHDFS
- HCatalog
- HBase
- Oozie
- Hive
- Yarn
- Resource Manager
- Storm

http://hortonworks.com/apache/knox-gateway/