Support Questions

Find answers, ask questions, and share your expertise

HDFS - MapReduce -> Basic Questions

Rising Star
Hi experts,
I have some basic questions about the relationship between MapReduce and HDFS:
  • Is placing a data file on HDFS done through MapReduce?
  • Do all transactions in HDFS use MapReduce jobs?

Does anyone know the answer? Many thanks!

1 ACCEPTED SOLUTION

Super Guru

HDFS is used for storage: it provides data redundancy and supports parallelism when reading and writing. MapReduce is a computation framework that allows you to process and generate large data sets with a parallel, distributed algorithm on a cluster. Other frameworks, such as Spark and Tez, can provide the same capability.
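To make the "computation framework" part concrete, here is essentially the classic word-count job from the Hadoop MapReduce tutorial; map tasks run in parallel over HDFS input splits and reduce tasks aggregate their output. The input and output paths are placeholders you pass on the command line:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: runs in parallel over input splits, emitting (word, 1) pairs.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: receives all the counts for one word and sums them.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Input and output are HDFS paths passed on the command line.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```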

For your specific questions:

1. Is placing a data file on HDFS done through MapReduce? You are not limited to writing to HDFS using MapReduce only; you can take advantage of other frameworks to read and write (see the sketch after this list).

2. Do all transactions in HDFS use MapReduce jobs? No.
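For example, here is a minimal Java sketch of writing a file through the HDFS FileSystem client API directly, with no MapReduce job involved. The NameNode URL and the target path are placeholders for illustration; adjust them to your cluster:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
  public static void main(String[] args) throws Exception {
    // Plain HDFS client write -- no MapReduce job is launched.
    Configuration conf = new Configuration();
    // Placeholder: point this at your own NameNode.
    conf.set("fs.defaultFS", "hdfs://namenode:8020");

    FileSystem fs = FileSystem.get(conf);
    // Hypothetical target path, for illustration only.
    Path target = new Path("/user/demo/hello.txt");

    try (FSDataOutputStream out = fs.create(target, true)) {
      out.writeUTF("written straight through the FileSystem API");
    }
    fs.close();
  }
}
```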


3 Replies

Rising Star

But when you put a file into HDFS, are you using a MapReduce job (even if you don't see it)?

Super Guru

You mean putting a file with hadoop fs -put? If so, then no: it takes advantage of the filesystem API to write to HDFS.
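To illustrate, copyFromLocalFile on the FileSystem client is a programmatic way to do what hadoop fs -put does: it streams the local file to the DataNodes without launching any MapReduce job. This is a minimal sketch; the NameNode URL and both paths are hypothetical placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPutExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder NameNode address

    FileSystem fs = FileSystem.get(conf);
    // Streams the local file into HDFS; no MapReduce job is involved.
    fs.copyFromLocalFile(new Path("/tmp/local-data.csv"),   // hypothetical local path
                         new Path("/user/demo/data.csv"));  // hypothetical HDFS path
    fs.close();
  }
}
```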