Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2445 | 04-27-2020 03:48 AM |
 | 4881 | 04-26-2020 06:18 PM |
 | 3976 | 04-26-2020 06:05 PM |
 | 3219 | 04-13-2020 08:53 PM |
 | 4925 | 03-31-2020 02:10 AM |
06-17-2019
05:28 AM
@Michael Bronson The error that you see in the Ambari UI while adding the "" property seems to be due to some other inconsistency in the data: "The configuration changes could not be validated for consistency due to an unknown error. Your changes have not been saved yet. Would you like to proceed and save the changes?" Please check and share the complete ambari-server.log after attempting to enable that property. I suspect that the "Consistency Check Failure" is caused by some other inconsistency in your config.
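For reference, a minimal way to pull that log on a default Ambari install (the path below is the stock location; adjust it if your install differs):

```
# Default Ambari Server log location on a standard install
tail -n 500 /var/log/ambari-server/ambari-server.log

# Or follow it live while re-attempting the config change in the UI
tail -f /var/log/ambari-server/ambari-server.log
```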
06-17-2019
05:21 AM
1 Kudo
@Michael Bronson The parameter "dfs.namenode.fs-limits.max-directory-items" is HDFS-specific, hence HDFS and the HDFS-dependent services and service components need to be restarted. The Ambari UI will show the required service components that need to be restarted. There is no need to restart the Ambari Server.
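As an illustrative check (not part of the original answer), once the restarts are done you can confirm what value the effective client configuration carries for that limit:

```
# Print the value of the limit from the effective client config
hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items
```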
06-17-2019
05:05 AM
@Michael Bronson Looks good. Yes, in your command "mycluster" needs to be replaced with "hdfsha".
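For example (your exact command isn't shown here, so the path below is only a placeholder), the HA nameservice form of a URI would look like:

```
# Using the HA nameservice "hdfsha" instead of the documentation's "mycluster"
hdfs dfs -ls hdfs://hdfsha/tmp
```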
06-17-2019
12:34 AM
In a NameNode HA enabled cluster, "dfs.nameservices" is defined, and "fs.defaultFS" is determined based on it. For example, if "dfs.nameservices=mycluster" then "fs.defaultFS" will ideally be "hdfs://mycluster". If NameNode HA is not enabled, then "fs.defaultFS" will point to the NameNode host/port.
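A quick way to see both values on your own cluster, using the stock hdfs getconf utility (the outputs in the comments just illustrate the pattern described above):

```
# Nameservice ID defined for NameNode HA, e.g. "mycluster"
hdfs getconf -confKey dfs.nameservices

# Default filesystem URI; with HA this is typically hdfs://<nameservice>,
# without HA it points at a single NameNode host:port
hdfs getconf -confKey fs.defaultFS
```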
06-17-2019
12:31 AM
@Michael Bronson "Mycluster" needs to be replaced with the "fs.defaultFS" parameter of your HDFS config.
06-16-2019
11:40 PM
@Michael Bronson A third-party doc reference might give you some idea on that: https://blogs.msdn.microsoft.com/bigdatasupport/2016/08/15/hdfs-gets-full-in-azure-hdinsight-with-many-hive-temporary-files/
06-16-2019
11:38 PM
@Michael Bronson Without testing, I cannot say for sure whether something will work or not. But at this point I trust the documentation. If something is written in the doc like the following, then ideally it should work: https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-ScratchDirectoryManagement That holds unless a bug has been reported somewhere for that tool. I do not find any bug reported for it, so I trust the tool until I find one. If you do find a bug with that tool, please report it.
06-16-2019
11:31 PM
@Michael Bronson As mentioned earlier, the parameters "hive.server2.clear.dangling.scratchdir" and "hive.server2.clear.dangling.scratchdir.interval" were added to HiveConf.java in Hive 1.3.0 and 2.2.0. But as you are using a lower version, Hive 1.2.1.2.6 (HDP 2.5, see https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.6/bk_release-notes/content/comp_versions.html), those parameters may not take effect, because they are only present from Hive 1.3.0 and 2.2.0 onwards (see https://jira.apache.org/jira/browse/HIVE-15068). You will have to rely on tools like "cleardanglingscratchdir".
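If you want to double-check which Hive you are on before deciding (a simple sketch, nothing HDP-specific):

```
# Print the Hive build version; the dangling-scratchdir properties from
# HIVE-15068 only take effect on Hive 1.3.0 / 2.2.0 and later
hive --version
```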
06-16-2019
11:19 PM
@Michael Bronson As per this JIRA: https://jira.apache.org/jira/browse/HIVE-15068, the parameters "hive.server2.clear.dangling.scratchdir" and "hive.server2.clear.dangling.scratchdir.interval" were added to HiveConf.java from Hive 1.3.0 and 2.2.0. So for safe cleaning of the scratch dir you might want to refer to: https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-ScratchDirectoryManagement # hive --service cleardanglingscratchdir [-r] [-v] [-s scratchdir]
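A hedged usage sketch of that tool (the scratch dir path below is the common default /tmp/hive; substitute your hive.exec.scratchdir if it differs):

```
# Dry run: list dangling scratch directories without deleting anything
hive --service cleardanglingscratchdir -v -s /tmp/hive

# Once the listed directories look right, actually remove them
hive --service cleardanglingscratchdir -r -s /tmp/hive
```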
06-16-2019
11:09 PM
And hive.start.cleanup.scratchdir=true
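To confirm the setting is in effect after updating the Hive config and restarting HiveServer2, one option (assuming a local HiveServer2 on the default port) is:

```
# Print the effective value of the property from an active Hive session
beeline -u jdbc:hive2://localhost:10000 -e "set hive.start.cleanup.scratchdir;"
```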