Created on 09-07-2018 09:06 AM - edited 09-16-2022 06:40 AM
Hi all,
I am new to Hortonworks; my task is to manage the cluster in our office. We plan to mount the additional hard drives on our servers under /data0, /data1, /data2, and so on. My question is: will Hortonworks automatically create the HDFS directories in those folders?
I have done a single-node test installation and noticed that a /hadoop folder was created.
Created 09-07-2018 11:20 AM
Ambari will, by default, pick up the mount points and configure them for the appropriate services. For example, for HDFS, Ambari configures dfs.datanode.data.dir and dfs.namenode.name.dir with all the mount points. So when you start using HDFS, you should see data inside /data0, /data1, /data2, and so on.
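For illustration, assuming mount points /data0, /data1, and /data2, the resulting hdfs-site.xml properties might look roughly like the snippet below. The exact subdirectories Ambari appends (for example hadoop/hdfs/data and hadoop/hdfs/namenode) can vary by version and layout, so treat this as a sketch rather than the exact values you will get:

<!-- Illustrative hdfs-site.xml values only; actual paths depend on your Ambari version and disk layout -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data0/hadoop/hdfs/data,/data1/hadoop/hdfs/data,/data2/hadoop/hdfs/data</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data0/hadoop/hdfs/namenode,/data1/hadoop/hdfs/namenode</value>
</property>

You can check what Ambari actually configured in the Ambari UI under HDFS > Configs before starting the DataNodes.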
Hope this helps.
Created 09-11-2018 01:54 AM
Thank you, it is clear to me now.
Created 09-14-2018 09:28 AM
@Ronnie 10, do consider accepting the answer if it helped you 🙂