I've been able to set it up for SSD, but I can't get RAM_DISK to work. I can create the ramdisk at the Linux level, attach a storage policy, and write to it outside of Hadoop. HDP appears to set up its directory structure on the ramdisk, but all write attempts end up on regular [DISK] storage. When I specify dfs.datanode.data.dir using the URI form (i.e. [RAM_DISK]file:///ramdisk/hdfs) as detailed in the doc mentioned above, the datanode fails to start, so I use [RAM_DISK]/ramdisk/hdfs instead.
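For reference, this is roughly the datanode configuration I'd expect to work based on the Apache heterogeneous-storage docs. Treat it as a sketch: the [DISK] path is a placeholder, /ramdisk/hdfs is my tmpfs mount, and the max.locked.memory value is an example size (memory-backed writes need dfs.datanode.max.locked.memory to be set high enough, which the HDP doc doesn't call out):

```xml
<!-- hdfs-site.xml on the datanode; /grid/0/hdfs/data is a placeholder for the regular disk dir -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[DISK]/grid/0/hdfs/data,[RAM_DISK]/ramdisk/hdfs</value>
</property>
<property>
  <!-- must be large enough to hold in-flight lazy-persist replicas; 2 GB here as an example -->
  <name>dfs.datanode.max.locked.memory</name>
  <value>2147483648</value>
</property>
```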
The document mentioned above seems to have some inconsistencies:
a) It says to use dfs.data.dir rather than dfs.datanode.data.dir (dfs.data.dir is the deprecated name for the same property).
b) It says to set dfs.checksum.type to NULL.
When the datanode starts, a warning message says the checksum type is invalid and that it is reverting to CRC.
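For what it's worth, my understanding (which may be wrong) is that dfs.checksum.type is read on the client side, and that the valid values in the Hadoop source are NULL, CRC32, and CRC32C, so the exact spelling matters. If that's right, the setting would belong in the client's configuration rather than the datanode's, something like:

```xml
<!-- client-side hdfs-site.xml; value must match the enum name exactly -->
<property>
  <name>dfs.checksum.type</name>
  <value>NULL</value>
</property>
```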
Does anyone have experience with tiered storage they can share?
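In case it helps anyone compare notes, this is the sequence I'm using; the mount point, sizes, ownership, and HDFS path are all mine, so adjust for your cluster:

```shell
# create the tmpfs ramdisk the [RAM_DISK] dir points at (2 GB as an example)
sudo mount -t tmpfs -o size=2g tmpfs /ramdisk
sudo mkdir -p /ramdisk/hdfs
sudo chown hdfs:hadoop /ramdisk/hdfs

# tag an HDFS directory with the memory-backed policy, then verify it took effect
hdfs storagepolicies -setStoragePolicy -path /tmp/lazydir -policy LAZY_PERSIST
hdfs storagepolicies -getStoragePolicy -path /tmp/lazydir
```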