Created 02-01-2017 01:30 PM
In production setups, are files loaded into HDFS from one particular machine?
If so, and if that machine were also a data node, wouldn't that machine be identified as a co-located client, thus preventing data distribution across the cluster?
Or is the standard practice to load the files from the name node host?
Or what other practice is commonly used for loading files into HDFS?
Appreciate the insights.
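To make the question concrete, here is a minimal sketch of the kind of load I mean, using the Hadoop FileSystem API. The host name and paths are just placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EdgeNodeLoad {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS would normally come from core-site.xml on the load machine.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

        try (FileSystem fs = FileSystem.get(conf)) {
            // The client streams the file to datanodes chosen by the
            // namenode's block placement policy.
            fs.copyFromLocalFile(new Path("/data/incoming/events.log"),
                                 new Path("/user/etl/events.log"));
        }
    }
}

If that load machine is itself a datanode, my understanding is the placement policy favors it for the first replica, which is what prompted the question.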
Created 02-03-2017 06:38 PM
BTW, what is the actual advantage of a co-located client? It only stores the first block of the file at the client/datanode, right? The rest of the blocks are distributed across HDFS. So what is the big advantage of storing the first block? Does that really help performance?
Appreciate the insights.
Created 02-03-2017 07:04 PM
Starting with version 2.1, you should see better read performance for colocated clients. For writes, not so much. It's the read that will be faster, because the client is on the same machine as the data block.
If my answer helped, please accept.
Created 02-03-2017 07:59 PM
But are people going to access only one block? Big Data itself implies processing thousands of blocks.
So why would faster access to one single block matter?
Created 02-04-2017 04:09 AM
I think I understand your point. See my new answer.
Created 02-04-2017 04:09 AM
I think there is confusion about how we are defining colocated clients, which stems from the way I first understood and answered your question. Your edge nodes are different from what Hadoop calls a colocated client, which is probably the term you have read somewhere.
When a MapReduce job runs, it spawns a number of mappers, and each mapper usually reads the data on the node it is running on (the local data). Each of these mappers is a client, or a colocated client when it is reading local data. Historically, however, these mappers did not take advantage of the fact that the data they read is local to where they run (in some cases a mapper might even read data from a remote machine, which is inefficient when it happens). The mappers used the same TCP protocol to read local data as they would to read remote data. It was determined that performance could be improved by about 20-30% just by making these mappers, once aware that the data is local, read the data blocks directly off disk. Hence this change was made. If you would like more details, please see the following JIRA.
https://issues.apache.org/jira/browse/HDFS-347
If you scroll down in this JIRA, you will see a design document. That design document should clear up any confusion you may have.
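If you want to see the effect yourself, here is a minimal sketch of a client read with short-circuit local reads turned on. The property names come from the HDFS short-circuit read documentation; the socket path and file path are placeholders, and the datanodes must be configured with the same domain socket path in hdfs-site.xml for this to take effect:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShortCircuitRead {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Client side of the HDFS-347 short-circuit read feature; these
        // settings normally live in hdfs-site.xml rather than in code.
        conf.setBoolean("dfs.client.read.shortcircuit", true);
        conf.set("dfs.domain.socket.path", "/var/lib/hadoop-hdfs/dn_socket");

        try (FileSystem fs = FileSystem.get(conf);
             FSDataInputStream in = fs.open(new Path("/user/etl/events.log"))) {
            // If this client runs on the datanode holding the block, the
            // read bypasses TCP and goes directly to the local disk.
            byte[] buf = new byte[4096];
            int n = in.read(buf);
            System.out.println("read " + n + " bytes");
        }
    }
}

When the client is not on the same machine as the block, the same code simply falls back to the normal TCP read path, which is why only colocated clients see the speedup.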