In the case of a NameNode HA enabled cluster, "dfs.nameservices" is defined, and "fs.defaultFS" is determined from it.
For example, if "dfs.nameservices=mycluster", then "fs.defaultFS" will typically be "hdfs://mycluster".
If NameNode HA is not enabled, then "fs.defaultFS" points directly to the NameNode host/port.
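As a minimal sketch, you can confirm the effective values on your cluster with the standard hdfs getconf utility (the values in the comments are only illustrative placeholders, not from your cluster):
hdfs getconf -confKey dfs.nameservices    # e.g. mycluster
hdfs getconf -confKey fs.defaultFS        # e.g. hdfs://mycluster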
@Jay, in my cluster (HDFS --> Configs) I see dfs.nameservices=hdfsha,
so it should be like this?
hadoop fs -rm -r -skipTrash hdfs://hdfsha/tmp/hive/hive/
@Jay, actually it should be like this:
hadoop fs -rm -r -skipTrash hdfs://hdfsha/tmp/hive/hive/*
We need to add the "*" after the slash in order to delete only the contents under /tmp/hive/hive and not the folder itself (/tmp/hive/hive).
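If you want to be careful, a minimal sketch is to list the directory before and after the delete (assuming the nameservice is hdfsha as above):
hadoop fs -ls hdfs://hdfsha/tmp/hive/hive/            # review what will be removed
hadoop fs -rm -r -skipTrash hdfs://hdfsha/tmp/hive/hive/*
hadoop fs -ls hdfs://hdfsha/tmp/hive/hive/            # should now be empty, but the directory itself remains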
@jay - nice
I see there the option:
hadoop fs -rm -r -skipTrash hdfs://mycluster/tmp/hive/hive/
this option will remove all folders under /tmp/hive/hive
but what is the value "mycluster"? (What do I need to put there instead?)
Whenever you change this config parameter, the cluster needs to be made aware of the change. When you start Ambari, the underlying components don't get started unless you explicitly start them!
So you can start Ambari without starting YARN or HDFS.
@Geoffrey Shelton Okot - do you mean to restart the Ambari server (i.e. ambari-server restart) instead of restarting the HDFS and YARN services? (after we set dfs.namenode.fs-limits.max-directory-items)
The parameter "dfs.namenode.fs-limits.max-directory-items" is HDFS specific, hence HDFS and the HDFS-dependent services and service components need to be restarted. The Ambari UI will show the required service components that need to be restarted.
No need to restart Ambari Server.
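After the HDFS restart, a minimal sketch to verify that the new value is in effect (the value in the comment is only an example; 1048576 is the usual Hadoop default, not necessarily your setting):
hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items    # e.g. 1048576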
Second, when I saved the parameter in Ambari (Ambari -> HDFS -> Configs -> Advanced -> Custom hdfs-site), I got:
The configuration changes could not be validated for consistency due to an unknown error. Your changes have not been saved yet. Would you like to proceed and save the changes?
Is this parameter supported in HDP version 2.6.4?