We know that all data ultimately lives on disk drives. The problem is that disk seek time (the latency of a disk operation) has not improved at the rate at which transfer rate (the disk's bandwidth) has. So if a disk access pattern is dominated by seeks, it takes longer to read or write a large dataset than it would to stream through it sequentially. For updating a small portion of the records in a database, a traditional RDBMS works just fine. However, when an update touches a large fraction of the database, MapReduce is more efficient, because it uses sort/merge to rebuild the database in a single batch pass. MapReduce trumps a traditional RDBMS on the following points:
1) MapReduce is a good fit for problems that need to analyze a complete dataset in batch mode, particularly for ad hoc analysis.
2) MapReduce suits applications where data is written once and read many times.
3) MapReduce can process petabytes of data in parallel.
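To make the sort/merge idea above concrete, here is a minimal, hypothetical sketch of the MapReduce pattern in plain Python (not Hadoop): a map phase emits key/value pairs for every record, the pairs are sorted so identical keys become adjacent (the sort/merge step), and a reduce phase combines each key's values in one batch pass over the whole dataset. The function names are illustrative, not from any real API.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # Emit an intermediate (word, 1) pair for every word in every record.
    for record in records:
        for word in record.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Sort/merge: ordering the pairs brings identical keys together,
    # so each key's values can be reduced in a single sequential pass.
    pairs = sorted(pairs, key=itemgetter(0))
    return {key: sum(count for _, count in group)
            for key, group in groupby(pairs, key=itemgetter(0))}

def word_count(records):
    # The whole dataset is processed in one batch, streaming through
    # the data rather than seeking to individual records.
    return reduce_phase(map_phase(records))

print(word_count(["a b a", "b c"]))  # → {'a': 2, 'b': 2, 'c': 1}
```

In a real Hadoop job the map and reduce phases run in parallel across many machines, and the framework performs the sort/merge between them; this single-process sketch only shows the data flow.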