Block

A block is the physical representation of data: the minimum unit of data that HDFS can read or write. The default HDFS block size is 128 MB, which can be configured as required. All blocks of a file are the same size except the last one, which may be the same size or smaller. Files are split into 128 MB blocks and then stored in the Hadoop filesystem.

InputSplit

An InputSplit is the logical representation of the data in a block. It is used during data processing in a MapReduce program or other processing techniques. An InputSplit does not contain the actual data, only a reference to it. By default, the split size is approximately equal to the block size. InputSplit is user-defined: the user can control the split size based on the size of the data in the MapReduce program.
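As a rough illustration of how the two concepts interact, the sketch below (written in Python for brevity; Hadoop itself is Java) mimics how a file is divided into physical blocks and how Hadoop's FileInputFormat computes the split size as max(minSize, min(maxSize, blockSize)). The function and parameter names here are illustrative, not actual Hadoop APIs.

```python
BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size: 128 MB


def hdfs_blocks(file_size, block_size=BLOCK_SIZE):
    """Divide a file into physical block sizes; the last block may be smaller."""
    sizes = [block_size] * (file_size // block_size)
    if file_size % block_size:
        sizes.append(file_size % block_size)
    return sizes


def split_size(block_size=BLOCK_SIZE, min_size=1, max_size=float("inf")):
    """Mirror of Hadoop's split-size rule: max(minSize, min(maxSize, blockSize))."""
    return max(min_size, min(max_size, block_size))


# A 300 MB file occupies three blocks: 128 MB, 128 MB, and a final 44 MB block.
blocks = hdfs_blocks(300 * 1024 * 1024)
print([b // (1024 * 1024) for b in blocks])  # [128, 128, 44]

# By default, the logical split size equals the physical block size...
print(split_size() == BLOCK_SIZE)  # True
# ...but a user can raise min_size to get larger (and therefore fewer) splits.
print(split_size(min_size=256 * 1024 * 1024) // (1024 * 1024))  # 256
```

Note how only the last block deviates from 128 MB, matching the description above, and how the split size tracks the block size until the user overrides it.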