Member since: 09-29-2015
Posts: 57
Kudos Received: 49
Solutions: 19
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1511 | 05-25-2017 06:03 PM |
| | 1367 | 10-19-2016 10:17 PM |
| | 1692 | 09-28-2016 08:41 PM |
| | 1022 | 09-21-2016 05:46 PM |
| | 4846 | 09-06-2016 11:49 PM |
04-06-2017 05:39 AM
Thanks Sowmya for sharing the details. Can you please also explain how the replication / DistCp job works, e.g. does each mapper write to a temp directory on the source NameNode and then copy to the target once done? What happens if a job replicating 100 files fails at the mapper end? If a copier fails for some subset of its files, will the directory become inconsistent? Is the atomic commit feature supported in HDP 2.5, and how is data inconsistency handled in case of job failure? For example, if a source directory containing 200 GB of changed files is being replicated and the job fails after 100 GB has been written to the target directory, will the target be rolled back to its previous state, or will only 100 GB remain at the target? Assumption: we have hundreds of files to transfer, the file size is relatively large (130 GB), the block size is 124 MB, and overwrite = true.
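For reference, below is a minimal sketch (not from this thread) of driving such a replication with DistCp's atomic commit through the Hadoop 2.x Java API; the cluster URIs, paths, and option values are illustrative assumptions. With atomic commit, DistCp copies into a work path on the target filesystem and only renames it into place once every copier has finished, so a failed job should not leave a partially written target directory.

```java
// Minimal sketch, not from the thread: replicate a directory with DistCp's
// atomic commit using the Hadoop 2.x API. All URIs and paths are hypothetical.
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

public class AtomicReplicationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    Path source = new Path("hdfs://source-nn:8020/data/landing");  // hypothetical source dir
    Path target = new Path("hdfs://target-nn:8020/data/landing");  // hypothetical target dir

    DistCpOptions options =
        new DistCpOptions(Collections.singletonList(source), target);
    options.setAtomicCommit(true);  // equivalent to the CLI flag -atomic: commit all or nothing
    // Work area on the *target* filesystem; data is copied here first and only
    // renamed into place when every copier (mapper) has finished successfully.
    options.setAtomicWorkPath(new Path("hdfs://target-nn:8020/tmp/distcp-work"));
    options.setMaxMaps(20);         // cap on parallel copier tasks (like -m 20)

    Job job = new DistCp(conf, options).execute();  // blocks until the MR job finishes
    // If any mapper exhausts its retries, the whole job fails before the final
    // rename, so the existing target directory is left untouched.
    System.out.println("Replication succeeded: " + job.isSuccessful());
  }
}
```

The CLI equivalent would be `hadoop distcp -atomic -tmp <work-path> <source> <target>`; whether `-atomic` can be combined with `-update` or `-overwrite` in a given HDP release should be checked against that release's DistCp documentation.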
09-22-2016 05:40 AM
@Sowmya Ramesh So as of now we can't stop or suspend all entities at once. Is there any plan to implement such a feature in the future?
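Until a bulk operation exists, one possible workaround is to suspend entities one at a time in a loop. The sketch below assumes the `falcon` CLI is installed and configured on the machine running it, and the entity names and type are hypothetical.

```java
// Workaround sketch (hypothetical entity names): suspend several Falcon
// entities one by one by invoking the Falcon CLI, since no bulk suspend
// command is available. Assumes `falcon` is on the PATH and configured.
import java.util.Arrays;
import java.util.List;

public class SuspendEntitiesSketch {
  public static void main(String[] args) throws Exception {
    List<String> processes = Arrays.asList("raw-ingest-process", "cleanse-process");

    for (String name : processes) {
      // Equivalent to: falcon entity -type process -suspend -name <name>
      Process cli = new ProcessBuilder(
          "falcon", "entity", "-type", "process", "-suspend", "-name", name)
          .inheritIO()   // show the CLI output in this process's console
          .start();
      if (cli.waitFor() != 0) {
        System.err.println("Failed to suspend entity: " + name);
      }
    }
  }
}
```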