Created 01-09-2017 04:41 AM
I have two questions about the dfs.replication parameter:
1. I know the default block replication factor is 3. If I set dfs.replication=1, does it affect cluster performance?
2. I have a lot of data that was written with dfs.replication=1, and now I am changing the configuration to dfs.replication=3. Will my existing data be replicated automatically, or do I have to rewrite it for replication to take effect? I need to be sure, because my data is very important.
P.S.: Is there any best practice for configuring dfs.replication?
Created 01-09-2017 05:53 AM
1. I know the default block replication factor is 3. If I set dfs.replication=1, does it affect cluster performance?
Since you are not replicating, your writes will be faster, at the expense of a significant risk of data loss and worse read performance. Reads can be slow because a block might sit on a node that is experiencing issues, with no other replica to fall back on, and the failure of just one node can cause job failures or permanent data loss.
2. I have a lot of data that was written with dfs.replication=1, and now I am changing the configuration to dfs.replication=3. Will my existing data be replicated automatically, or do I have to rewrite it for replication to take effect? I need to be sure, because my data is very important.
Use setrep to change the replication factor of existing files. It will replicate the existing data (you will have to provide the path):
hadoop fs -setrep [-R] [-w] <numReplicas> <path>
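For example, to raise the replication factor of everything under a directory (the path here is just a placeholder) and wait until the new replication level is reached:
hadoop fs -setrep -w 3 /user/mydata
The -w flag makes the command block until replication completes, which can take a long time on large datasets; for a directory path, setrep is applied recursively, and the -R flag is accepted only for backward compatibility.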
P.S.: Is there any best practice for configuring dfs.replication?
Always use the default replication factor of 3. It provides data resiliency and redundancy in case of node failures, and it also helps read performance. In rare cases you can increase the replication factor on heavily read data to distribute it further and make reads faster.
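If you want to confirm what replication factor your existing files currently have before changing anything, hadoop fs -stat can print it per file (the path below is hypothetical):
hadoop fs -stat %r /user/mydata/part-00000
A plain hadoop fs -ls /user/mydata also shows each file's replication factor in the second column of its output.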
Created on 01-09-2017 06:05 AM - edited 08-19-2019 03:41 AM
Thank you for your answers.
I want to ask one more question.
If I change the setting only in the Ambari UI, is that equivalent to running the setrep command? Or do I need to change it in the Ambari UI before using setrep?
Created 01-09-2017 06:10 AM
No. The Ambari UI setting applies only to files that you create in the future. It will not run the setrep command for you; you have to run that from the shell as described above.
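To put the two steps together, a rough sequence (with a hypothetical path) would be: change dfs.replication to 3 in Ambari so that new files pick it up, then re-replicate the existing data and verify it:
hadoop fs -setrep -w 3 /user/mydata
hdfs fsck /user/mydata -files -blocks
The fsck report shows the replication of each block and flags any under-replicated blocks that are still being copied.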