Member since: 02-21-2016 · Posts: 14 · Kudos Received: 21 · Solutions: 0
05-02-2016
08:45 AM
1) Linux: You can enable some parameters, such as huge-file support, to make things faster, but nothing near 128 MB; you just don't get much benefit there. If you want to know how to tune the filesystem, see Appendix B of our reference architecture: https://hortonworks.com/wp-content/uploads/2013/10/4AA5-9017ENW.pdf - Mount filesystems with nodiratime, noatime - Change the block readahead to 8k Some file systems can also enable huge-file support. 2) Windows: Basically yes. NTFS has something it calls a cluster, but it seems to be very similar to a block: https://support.microsoft.com/en-gb/kb/140365
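For reference, the two Linux tunings mentioned above might look like this. This is only a sketch: the device name and mount point are placeholders, and the "8k" readahead figure is assumed here to mean 8192 sectors (the 512-byte unit that blockdev uses), so check the value against Appendix B of the reference architecture before applying it.

```shell
# /etc/fstab entry for a Hadoop data disk (placeholder device/mount point):
# noatime/nodiratime disable access-time updates on files and directories
/dev/sdb1  /grid/0  ext4  defaults,noatime,nodiratime  0 0

# Set the block-device readahead (value in 512-byte sectors; 8192 assumed)
blockdev --setra 8192 /dev/sdb1
# Verify the current readahead setting
blockdev --getra /dev/sdb1
```

Note that fstab changes take effect on the next mount, so either remount the filesystem or reboot after editing it.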
03-02-2016
09:49 PM
Hi, Is there a way to get rid of (or prevent the generation of) the .&lt;filename&gt;.crc files that are generated when using a Java filesystem client to write files to a local Linux file system? I've already tried using RawLocalFileSystem in place of LocalFileSystem, and also tried setting the property fs.file.impl to org.apache.hadoop.fs.RawLocalFileSystem (as suggested in https://www.safaribooksonline.com/library/view/hadoop-the-definitive/9781449328917/ch04.html), but without any success: the .crc files are still generated on the local Linux file system. Any help on this would be really appreciated. Regards, Abu
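For anyone hitting the same issue, here is a minimal sketch of two programmatic approaches (it assumes the Hadoop client libraries are on the classpath, so it is not runnable standalone, and the output path is just an example): LocalFileSystem is a ChecksumFileSystem wrapper, so you can either tell it not to emit checksums via setWriteChecksum(false), or unwrap it with getRawFileSystem() to write through the checksum-free implementation directly.

```java
// Sketch only; requires the Hadoop client libraries on the classpath.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class NoCrcLocalWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        LocalFileSystem localFs = FileSystem.getLocal(conf);

        // Option A: keep the checksummed wrapper but disable writing
        // the .<filename>.crc sidecar files.
        localFs.setWriteChecksum(false);

        // Option B: unwrap to the raw, checksum-free filesystem.
        FileSystem rawFs = localFs.getRawFileSystem();

        try (FSDataOutputStream out =
                 rawFs.create(new Path("/tmp/no-crc-example.txt"))) {
            out.writeBytes("written without a .crc sidecar\n");
        }
    }
}
```

The reason setting fs.file.impl alone can fail is that code obtaining the filesystem via FileSystem.getLocal() or a cached "file://" instance may still receive the checksummed LocalFileSystem, so disabling checksums on the actual handle you write through is the more reliable route.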
04-16-2018
10:44 AM
Which of the following statements is true: (a) the DataNode stores a single ".meta" file corresponding to each block replica, or (b) for each block replica hosted by a DataNode, there is a corresponding metadata file?