Contributor
Posts: 34
Registered: 01-11-2016

HDFS 3 replicas despite setrep -w 2 for Spark checkpoint


Hi

 

I am building a dashboard based on streaming data. 

I use Cloudera 5.10, Spark 2.1 and Kafka 0.10.

 

I wanted to give my app a little boost, and one of the optimizations was to move the checkpoint dir to SSD drives.

I have added 4 fresh and empty nodes with volumes prefixed as SSD (256 GB each). All of them are assigned to the same rack.

I have set the storage policy on hdfs:///spark/checkpoints/myapp to ALL_SSD:

hdfs storagepolicies -setStoragePolicy -path hdfs:///spark/checkpoints/myapp -policy ALL_SSD

I have changed the replication factor for this dir from the default 3 to 2 with the command:

hdfs dfs -setrep -w 2 hdfs:///spark/checkpoints/myapp
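
For reference, something like the following should confirm what was actually applied - -getStoragePolicy reports the policy set on the path, and the second column of the -ls output is the per-file replication factor:

hdfs storagepolicies -getStoragePolicy -path hdfs:///spark/checkpoints/myapp

hdfs dfs -ls hdfs:///spark/checkpoints/myapp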

 

And after I start my app,

hdfs fsck hdfs:///spark/checkpoints/myapp -files -blocks -locations

reports that the blocks have 3 replicas (2 on SSD and 1 on DISK).

 

Problem 1:

I got 2 SSD and 1 DISK instead of the expected 3 SSD (if the replication factor is 3). I suspect a problem with racks: according to the replica placement algorithm (2 copies in the same rack and the 3rd on a node in another rack), the "fallback storages for replication" setting (DISK) of the ALL_SSD policy could have been used here.
I will try to reorganize the node placement and spread the nodes over 2 racks - does anyone have any other ideas why I got 1 replica on DISK?

 

Problem 2:

Why do I have 3 replicas even though I manually ran setrep -w 2 on the parent directory?

 

Best Regards

Artur

Posts: 1,730
Kudos: 357
Solutions: 274
Registered: 07-31-2013

Re: HDFS 3 replicas despite setrep -w 2 for Spark checkpoint

> Problem 1

Do you have rack awareness configured, i.e. do you have 2 or more racks configured for your DataNode hosts, and are your 4 single-SSD-volume hosts all in the same rack? HDFS will try never to place all 3 replicas in the same rack if 2 or more racks are available, to ensure availability.
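
If you are unsure, a quick way to see the rack mapping the NameNode currently has for your DataNodes is (typically needs HDFS superuser privileges):

hdfs dfsadmin -printTopology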

Another reason HDFS may omit a specified storage volume type is if it is deemed unfit (low space for target block size, failed volume, etc.).
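
One way to spot that kind of issue (the exact fields vary a bit between releases) is the per-DataNode report, which lists configured capacity and remaining space, and in recent releases volume failures:

hdfs dfsadmin -report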

Once you've determined and resolved the potential suspects above, you should be able to use the HDFS mover command to fully enforce the storage policy on the files that violate it.
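
For example, to restrict the migration to your checkpoint directory instead of scanning the whole namespace, something like:

hdfs mover -p hdfs:///spark/checkpoints/myapp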

> Problem 2

Replication factor cannot be applied to directories the way storage policies can - https://community.cloudera.com/t5/Storage-Random-Access-HDFS/Specific-replication-number-for-a-folde.... It remains per-file (and is set at each file write), so your application's HDFS writer must be configured to use the lower replication. For example, the FileSystem create methods accept an arbitrary replication factor to use: http://archive.cloudera.com/cdh5/cdh/5/hadoop/api/org/apache/hadoop/fs/FileSystem.html#create(org.ap...)
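
For a Spark Streaming checkpoint specifically, one option - an assumption on my side, I have not verified it against Spark 2.1's checkpoint writer - is to lower the client-side default replication it inherits from the Hadoop configuration, by passing dfs.replication through the spark.hadoop.* prefix at submit time:

spark-submit --conf spark.hadoop.dfs.replication=2 ...

This would only affect files the application writes from then on; files already in HDFS keep the replication they were created with unless you run setrep on them again.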