Created 02-21-2018 11:16 PM
I just deployed my cluster on Azure using Cloudbreak. Right now my NameNode directory is /hadoopfs/fs1/hdfs/namenode and my DataNode directories are /hadoop/hdfs/data, /hadoopfs/fs1/hadoop/hdfs/data, and /mnt/resource/hadoop/hdfs/data. I want to change these to a location on my Data Lake Store.
When I paste a path starting with "adl:/", it throws an error. I have already set up access and everything. I just want to change the directories to save space on my local filesystem.
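For reference, an ADLS Gen1 location is normally addressed with a full adl:// URI (account plus path) rather than a bare "adl:/" prefix, and the connector needs OAuth credentials in core-site.xml before such paths resolve. Below is a minimal sketch of that access configuration; the account name, application ID, secret, and tenant ID are all placeholders, not values from this cluster.

<!-- Sketch: minimal ADLS Gen1 access settings in core-site.xml (all values are placeholders). -->
<property>
  <name>fs.adl.oauth2.access.token.provider.type</name>
  <value>ClientCredential</value>
</property>
<property>
  <name>fs.adl.oauth2.client.id</name>
  <value>YOUR_APPLICATION_ID</value>
</property>
<property>
  <name>fs.adl.oauth2.credential</name>
  <value>YOUR_CLIENT_SECRET</value>
</property>
<property>
  <name>fs.adl.oauth2.refresh.url</name>
  <value>https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/token</value>
</property>
<!-- With access in place, a full ADLS path looks like: adl://youraccount.azuredatalakestore.net/some/dir -->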
Also, I am getting alerts like:
Capacity Used:[100%, 28672], Capacity Remaining:[0].
I have 110 GB of RAM on my master node. Why is it already used up?
Created 02-23-2018 01:58 PM
Have you set fs.defaultFS to ADLS as well? If so, note that this is not an official setup supported by Hortonworks, and it might cause HDFS to calculate free space incorrectly.
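For context, pointing the default filesystem at ADLS would typically mean a core-site.xml entry along these lines; the account name is a placeholder, and, as noted above, this layout is not officially supported by Hortonworks.

<!-- Sketch: ADLS Gen1 as the default filesystem (unsupported setup; account name is a placeholder). -->
<property>
  <name>fs.defaultFS</name>
  <value>adl://youraccount.azuredatalakestore.net</value>
</property>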
Hope this helps!
Created 02-23-2018 06:09 PM
I did. I just had to change the value of Reserved Space for HDFS in the HDFS configs. Thanks for commenting.
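For anyone hitting the same alert: the Ambari setting "Reserved Space for HDFS" typically corresponds to the dfs.datanode.du.reserved property (bytes reserved per volume for non-HDFS use), so lowering it increases the capacity HDFS reports as available. A sketch of the equivalent hdfs-site.xml entry, with an assumed example value rather than the poster's actual setting:

<!-- Sketch: non-HDFS space reserved per DataNode volume; 10737418240 bytes (10 GB) is only an example value. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>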