<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Hive MetaStore always fails to start in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Hive-MetaStore-always-fails-to-start/m-p/296110#M218115</link>
    <description>&lt;P&gt;Thank you&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/60150"&gt;@stevenmatison&lt;/a&gt;,&amp;nbsp;it's working now.&lt;/P&gt;</description>
    <pubDate>Mon, 18 May 2020 10:31:35 GMT</pubDate>
    <dc:creator>Udhav</dc:creator>
    <dc:date>2020-05-18T10:31:35Z</dc:date>
    <item>
      <title>Hive MetaStore always fails to start</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-MetaStore-always-fails-to-start/m-p/295545#M217801</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;Whenever I add the Hive service to my cluster, it always fails to start.&lt;/P&gt;&lt;P&gt;Following are the stderr and stdout.&lt;/P&gt;&lt;P&gt;Can someone help me resolve this issue?&lt;/P&gt;&lt;P&gt;Thank you in advance.&lt;/P&gt;&lt;PRE&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;stderr:&lt;/STRONG&gt; &lt;/FONT&gt;&lt;BR /&gt;Traceback (most recent call last):&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 995, in restart&lt;BR /&gt;self.status(env)&lt;BR /&gt;File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 87, in status&lt;BR /&gt;check_process_status(status_params.hive_metastore_pid)&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py", line 43, in check_process_status&lt;BR /&gt;raise ComponentIsNotRunning()&lt;BR /&gt;ComponentIsNotRunning&lt;BR /&gt;&lt;BR /&gt;The above exception was the cause of the following exception:&lt;BR /&gt;&lt;BR /&gt;Traceback (most recent call last):&lt;BR /&gt;File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 201, in &amp;lt;module&amp;gt;&lt;BR /&gt;HiveMetastore().execute()&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute&lt;BR /&gt;method(env)&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 1006, in restart&lt;BR /&gt;self.start(env, upgrade_type=upgrade_type)&lt;BR /&gt;File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 61, in start&lt;BR /&gt;create_metastore_schema() # execute without config lock&lt;BR /&gt;File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py", line 487, in create_metastore_schema&lt;BR /&gt;user = params.hive_user&lt;BR /&gt;File 
"/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__&lt;BR /&gt;self.env.run()&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run&lt;BR /&gt;self.run_action(resource, action)&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action&lt;BR /&gt;provider_action()&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run&lt;BR /&gt;returns=self.resource.returns)&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner&lt;BR /&gt;result = function(command, **kwargs)&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call&lt;BR /&gt;tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper&lt;BR /&gt;result = _call(command, **kwargs_copy)&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call&lt;BR /&gt;raise ExecutionFailed(err_msg, code, out, err)&lt;BR /&gt;resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose' returned 1. 
SLF4J: Class path contains multiple SLF4J bindings.&lt;BR /&gt;SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]&lt;BR /&gt;SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]&lt;BR /&gt;SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.&lt;BR /&gt;SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]&lt;BR /&gt;Initializing the schema to: 3.1.1000&lt;BR /&gt;Metastore connection URL: jdbc:mysql://localhost/hive?createDatabaseIfNotExist=true&lt;BR /&gt;Metastore Connection Driver : com.mysql.jdbc.Driver&lt;BR /&gt;Metastore connection User: hive&lt;BR /&gt;Thu May 07 01:06:19 IST 2020 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. 
You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.&lt;BR /&gt;Failed to get schema version.&lt;BR /&gt;Underlying cause: java.sql.SQLException : Access denied for user 'hive'@'localhost' (using password: YES)&lt;BR /&gt;SQL Error code: 1045&lt;BR /&gt;org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.&lt;BR /&gt;at org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:94)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.tools.MetastoreSchemaTool.getConnectionToMetastore(MetastoreSchemaTool.java:250)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.tools.MetastoreSchemaTool.testConnectionToMetastore(MetastoreSchemaTool.java:333)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.tools.SchemaToolTaskInit.execute(SchemaToolTaskInit.java:53)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.tools.MetastoreSchemaTool.run(MetastoreSchemaTool.java:446)&lt;BR /&gt;at org.apache.hive.beeline.schematool.HiveSchemaTool.main(HiveSchemaTool.java:138)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498)&lt;BR /&gt;at org.apache.hadoop.util.RunJar.run(RunJar.java:318)&lt;BR /&gt;at org.apache.hadoop.util.RunJar.main(RunJar.java:232)&lt;BR /&gt;Caused by: java.sql.SQLException: Access denied for user 'hive'@'localhost' (using password: YES)&lt;BR /&gt;at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965)&lt;BR /&gt;at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)&lt;BR /&gt;at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)&lt;BR /&gt;at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:873)&lt;BR /&gt;at 
com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1710)&lt;BR /&gt;at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1226)&lt;BR /&gt;at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2188)&lt;BR /&gt;at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219)&lt;BR /&gt;at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2014)&lt;BR /&gt;at com.mysql.jdbc.ConnectionImpl.&amp;lt;init&amp;gt;(ConnectionImpl.java:776)&lt;BR /&gt;at com.mysql.jdbc.JDBC4Connection.&amp;lt;init&amp;gt;(JDBC4Connection.java:47)&lt;BR /&gt;at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)&lt;BR /&gt;at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)&lt;BR /&gt;at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)&lt;BR /&gt;at java.lang.reflect.Constructor.newInstance(Constructor.java:423)&lt;BR /&gt;at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)&lt;BR /&gt;at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:386)&lt;BR /&gt;at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:330)&lt;BR /&gt;at java.sql.DriverManager.getConnection(DriverManager.java:664)&lt;BR /&gt;at java.sql.DriverManager.getConnection(DriverManager.java:247)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:88)&lt;BR /&gt;... 
11 more&lt;BR /&gt;*** schemaTool failed ***&lt;BR /&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;stdout:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;2020-05-07 01:06:04,904 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -&amp;gt; 3.1.4.0-315&lt;BR /&gt;2020-05-07 01:06:04,917 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf&lt;BR /&gt;2020-05-07 01:06:05,089 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -&amp;gt; 3.1.4.0-315&lt;BR /&gt;2020-05-07 01:06:05,092 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf&lt;BR /&gt;2020-05-07 01:06:05,093 - Group['livy'] {}&lt;BR /&gt;2020-05-07 01:06:05,094 - Group['spark'] {}&lt;BR /&gt;2020-05-07 01:06:05,094 - Group['hdfs'] {}&lt;BR /&gt;2020-05-07 01:06:05,094 - Group['hadoop'] {}&lt;BR /&gt;2020-05-07 01:06:05,094 - Group['users'] {}&lt;BR /&gt;2020-05-07 01:06:05,095 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,095 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,096 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,096 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,097 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,097 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,098 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,098 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 
'users'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,099 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,100 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,100 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,101 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,102 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,102 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-05-07 01:06:05,103 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}&lt;BR /&gt;2020-05-07 01:06:05,104 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}&lt;BR /&gt;2020-05-07 01:06:05,108 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if&lt;BR /&gt;2020-05-07 01:06:05,108 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}&lt;BR /&gt;2020-05-07 01:06:05,109 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}&lt;BR /&gt;2020-05-07 01:06:05,110 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}&lt;BR /&gt;2020-05-07 01:06:05,110 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}&lt;BR 
/&gt;2020-05-07 01:06:05,116 - call returned (0, '1017')&lt;BR /&gt;2020-05-07 01:06:05,117 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}&lt;BR /&gt;2020-05-07 01:06:05,123 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] due to not_if&lt;BR /&gt;2020-05-07 01:06:05,124 - Group['hdfs'] {}&lt;BR /&gt;2020-05-07 01:06:05,124 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}&lt;BR /&gt;2020-05-07 01:06:05,125 - FS Type: HDFS&lt;BR /&gt;2020-05-07 01:06:05,125 - Directory['/etc/hadoop'] {'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:05,137 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}&lt;BR /&gt;2020-05-07 01:06:05,137 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}&lt;BR /&gt;2020-05-07 01:06:05,155 - Execute[('setenforce', '0')] {'not_if': '(! 
which getenforce ) || (which getenforce &amp;amp;&amp;amp; getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}&lt;BR /&gt;2020-05-07 01:06:05,158 - Skipping Execute[('setenforce', '0')] due to not_if&lt;BR /&gt;2020-05-07 01:06:05,159 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}&lt;BR /&gt;2020-05-07 01:06:05,160 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}&lt;BR /&gt;2020-05-07 01:06:05,161 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}&lt;BR /&gt;2020-05-07 01:06:05,161 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}&lt;BR /&gt;2020-05-07 01:06:05,163 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}&lt;BR /&gt;2020-05-07 01:06:05,164 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}&lt;BR /&gt;2020-05-07 01:06:05,170 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:05,178 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}&lt;BR /&gt;2020-05-07 01:06:05,179 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:05,179 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}&lt;BR /&gt;2020-05-07 01:06:05,182 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 
01:06:05,186 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:05,190 - Skipping unlimited key JCE policy check and setup since it is not required&lt;BR /&gt;2020-05-07 01:06:05,534 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf&lt;BR /&gt;2020-05-07 01:06:05,542 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}&lt;BR /&gt;2020-05-07 01:06:05,559 - call returned (0, 'hive-server2 - 3.1.4.0-315')&lt;BR /&gt;2020-05-07 01:06:05,560 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -&amp;gt; 3.1.4.0-315&lt;BR /&gt;2020-05-07 01:06:05,574 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://localhost:8080/resources/CredentialUtil.jar'), 'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:05,576 - Not downloading the file from http://localhost:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists&lt;BR /&gt;2020-05-07 01:06:06,220 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive.pid 1&amp;gt;/tmp/tmptbeEn2 2&amp;gt;/tmp/tmp7HHFD1''] {'quiet': False}&lt;BR /&gt;2020-05-07 01:06:06,233 - call returned (1, '')&lt;BR /&gt;2020-05-07 01:06:06,233 - Execution of 'cat /var/run/hive/hive.pid 1&amp;gt;/tmp/tmptbeEn2 2&amp;gt;/tmp/tmp7HHFD1' returned 1. cat: /var/run/hive/hive.pid: No such file or directory&lt;BR /&gt;&lt;BR /&gt;2020-05-07 01:06:06,233 - get_user_call_output returned (1, u'', u'cat: /var/run/hive/hive.pid: No such file or directory')&lt;BR /&gt;2020-05-07 01:06:06,234 - Execute['ambari-sudo.sh kill '] {'not_if': '! 
(ls /var/run/hive/hive.pid &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;amp; ps -p &amp;gt;/dev/null 2&amp;gt;&amp;amp;1)'}&lt;BR /&gt;2020-05-07 01:06:06,239 - Skipping Execute['ambari-sudo.sh kill '] due to not_if&lt;BR /&gt;2020-05-07 01:06:06,240 - Execute['ambari-sudo.sh kill -9 '] {'not_if': '! (ls /var/run/hive/hive.pid &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;amp; ps -p &amp;gt;/dev/null 2&amp;gt;&amp;amp;1) || ( sleep 5 &amp;amp;&amp;amp; ! (ls /var/run/hive/hive.pid &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;amp; ps -p &amp;gt;/dev/null 2&amp;gt;&amp;amp;1) )', 'ignore_failures': True}&lt;BR /&gt;2020-05-07 01:06:06,243 - Skipping Execute['ambari-sudo.sh kill -9 '] due to not_if&lt;BR /&gt;2020-05-07 01:06:06,244 - Execute['! (ls /var/run/hive/hive.pid &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;amp; ps -p &amp;gt;/dev/null 2&amp;gt;&amp;amp;1)'] {'tries': 20, 'try_sleep': 3}&lt;BR /&gt;2020-05-07 01:06:06,247 - File['/var/run/hive/hive.pid'] {'action': ['delete']}&lt;BR /&gt;2020-05-07 01:06:06,248 - Pid file /var/run/hive/hive.pid is empty or does not exist&lt;BR /&gt;2020-05-07 01:06:06,251 - Yarn already refreshed&lt;BR /&gt;2020-05-07 01:06:06,251 - HdfsResource['/user/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.4.0-315/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://localhost:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:06,254 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X 
GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://localhost:50070/webhdfs/v1/user/hive?op=GETFILESTATUS&amp;amp;user.name=hdfs'"'"' 1&amp;gt;/tmp/tmpT6HbFH 2&amp;gt;/tmp/tmpdfPhxt''] {'logoutput': None, 'quiet': False}&lt;BR /&gt;2020-05-07 01:06:06,281 - call returned (0, '')&lt;BR /&gt;2020-05-07 01:06:06,281 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":19006,"group":"hdfs","length":0,"modificationTime":1588792770732,"owner":"hive","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')&lt;BR /&gt;2020-05-07 01:06:06,282 - HdfsResource['/warehouse/tablespace/external/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.4.0-315/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://localhost:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}&lt;BR /&gt;2020-05-07 01:06:06,283 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://localhost:50070/webhdfs/v1/warehouse/tablespace/external/hive?op=GETFILESTATUS&amp;amp;user.name=hdfs'"'"' 1&amp;gt;/tmp/tmprlGA1e 2&amp;gt;/tmp/tmpxKIV4S''] {'logoutput': None, 'quiet': False}&lt;BR /&gt;2020-05-07 01:06:06,311 - call returned (0, '')&lt;BR /&gt;2020-05-07 01:06:06,312 - get_user_call_output returned (0, 
u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":0,"fileId":19010,"group":"hadoop","length":0,"modificationTime":1588792770904,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')&lt;BR /&gt;2020-05-07 01:06:06,312 - Skipping the operation for not managed DFS directory /warehouse/tablespace/external/hive since immutable_paths contains it.&lt;BR /&gt;2020-05-07 01:06:06,313 - HdfsResource['/warehouse/tablespace/managed/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.4.0-315/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://localhost:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0700}&lt;BR /&gt;2020-05-07 01:06:06,314 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://localhost:50070/webhdfs/v1/warehouse/tablespace/managed/hive?op=GETFILESTATUS&amp;amp;user.name=hdfs'"'"' 1&amp;gt;/tmp/tmph9qvm4 2&amp;gt;/tmp/tmpZjXRID''] {'logoutput': None, 'quiet': False}&lt;BR /&gt;2020-05-07 01:06:06,343 - call returned (0, '')&lt;BR /&gt;2020-05-07 01:06:06,343 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":0,"fileId":19012,"group":"hadoop","length":0,"modificationTime":1588792771129,"owner":"hive","pathSuffix":"","permission":"700","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')&lt;BR /&gt;2020-05-07 01:06:06,343 - Skipping the operation 
for not managed DFS directory /warehouse/tablespace/managed/hive since immutable_paths contains it.&lt;BR /&gt;2020-05-07 01:06:06,344 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'hdfs getconf -confKey dfs.namenode.acls.enabled 1&amp;gt;/tmp/tmp3D8soo 2&amp;gt;/tmp/tmpNLPv95''] {'quiet': False}&lt;BR /&gt;2020-05-07 01:06:07,279 - call returned (0, '')&lt;BR /&gt;2020-05-07 01:06:07,280 - get_user_call_output returned (0, u'true', u'')&lt;BR /&gt;2020-05-07 01:06:07,280 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'hdfs getconf -confKey dfs.namenode.posix.acl.inheritance.enabled 1&amp;gt;/tmp/tmpbQX4bK 2&amp;gt;/tmp/tmpiOs0xI''] {'quiet': False}&lt;BR /&gt;2020-05-07 01:06:08,209 - call returned (0, '')&lt;BR /&gt;2020-05-07 01:06:08,210 - get_user_call_output returned (0, u'true', u'')&lt;BR /&gt;2020-05-07 01:06:08,210 - Execute['hdfs dfs -setfacl -m default:user:hive:rwx /warehouse/tablespace/external/hive'] {'user': 'hdfs'}&lt;BR /&gt;2020-05-07 01:06:10,250 - Execute['hdfs dfs -setfacl -m default:user:hive:rwx /warehouse/tablespace/managed/hive'] {'user': 'hdfs'}&lt;BR /&gt;2020-05-07 01:06:12,237 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.4.0-315/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://localhost:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']}&lt;BR /&gt;2020-05-07 01:06:12,239 - Directories to fill with configs: [u'/usr/hdp/current/hive-metastore/conf', u'/usr/hdp/current/hive-metastore/conf/']&lt;BR /&gt;2020-05-07 01:06:12,240 - Directory['/etc/hive/3.1.4.0-315/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': 
True, 'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:12,240 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.4.0-315/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}&lt;BR /&gt;2020-05-07 01:06:12,249 - Generating config: /etc/hive/3.1.4.0-315/0/mapred-site.xml&lt;BR /&gt;2020-05-07 01:06:12,249 - File['/etc/hive/3.1.4.0-315/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}&lt;BR /&gt;2020-05-07 01:06:12,288 - File['/etc/hive/3.1.4.0-315/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,288 - File['/etc/hive/3.1.4.0-315/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:12,291 - File['/etc/hive/3.1.4.0-315/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,293 - File['/etc/hive/3.1.4.0-315/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,295 - File['/etc/hive/3.1.4.0-315/0/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,296 - File['/etc/hive/3.1.4.0-315/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,299 - File['/etc/hive/3.1.4.0-315/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,300 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.4.0-315/0', 'configurations': {'beeline.hs2.jdbc.url.container': 
u'jdbc:hive2://localhost:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.default': u'container'}}&lt;BR /&gt;2020-05-07 01:06:12,308 - Generating config: /etc/hive/3.1.4.0-315/0/beeline-site.xml&lt;BR /&gt;2020-05-07 01:06:12,308 - File['/etc/hive/3.1.4.0-315/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}&lt;BR /&gt;2020-05-07 01:06:12,309 - File['/etc/hive/3.1.4.0-315/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,309 - Directory['/etc/hive/3.1.4.0-315/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:12,310 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.4.0-315/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}&lt;BR /&gt;2020-05-07 01:06:12,315 - Generating config: /etc/hive/3.1.4.0-315/0/mapred-site.xml&lt;BR /&gt;2020-05-07 01:06:12,316 - File['/etc/hive/3.1.4.0-315/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}&lt;BR /&gt;2020-05-07 01:06:12,353 - File['/etc/hive/3.1.4.0-315/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,353 - File['/etc/hive/3.1.4.0-315/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:12,356 - File['/etc/hive/3.1.4.0-315/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,358 - File['/etc/hive/3.1.4.0-315/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,360 - File['/etc/hive/3.1.4.0-315/0/hive-log4j2.properties'] {'content': 
InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,361 - File['/etc/hive/3.1.4.0-315/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,363 - File['/etc/hive/3.1.4.0-315/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,363 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.4.0-315/0', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://localhost:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.default': u'container'}}&lt;BR /&gt;2020-05-07 01:06:12,373 - Generating config: /etc/hive/3.1.4.0-315/0/beeline-site.xml&lt;BR /&gt;2020-05-07 01:06:12,373 - File['/etc/hive/3.1.4.0-315/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}&lt;BR /&gt;2020-05-07 01:06:12,375 - File['/etc/hive/3.1.4.0-315/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,375 - File['/usr/hdp/current/hive-metastore/conf/hive-site.jceks'] {'content': StaticFile('/var/lib/ambari-agent/cred/conf/hive_metastore/hive-site.jceks'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0640}&lt;BR /&gt;2020-05-07 01:06:12,376 - Writing File['/usr/hdp/current/hive-metastore/conf/hive-site.jceks'] because contents don't match&lt;BR /&gt;2020-05-07 01:06:12,376 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/', 'mode': 0644, 'configuration_attributes': {u'hidden': {u'javax.jdo.option.ConnectionPassword': u'HIVE_CLIENT,CONFIG_DOWNLOAD'}}, 'owner': 'hive', 'configurations': ...}&lt;BR /&gt;2020-05-07 01:06:12,382 - Generating config: 
/usr/hdp/current/hive-metastore/conf/hive-site.xml&lt;BR /&gt;2020-05-07 01:06:12,382 - File['/usr/hdp/current/hive-metastore/conf/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}&lt;BR /&gt;2020-05-07 01:06:12,506 - Writing File['/usr/hdp/current/hive-metastore/conf/hive-site.xml'] because contents don't match&lt;BR /&gt;2020-05-07 01:06:12,509 - File['/usr/hdp/current/hive-metastore/conf//hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:12,510 - Writing File['/usr/hdp/current/hive-metastore/conf//hive-env.sh'] because contents don't match&lt;BR /&gt;2020-05-07 01:06:12,510 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}&lt;BR /&gt;2020-05-07 01:06:12,512 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,512 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://localhost:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,512 - Not downloading the file from http://localhost:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists&lt;BR /&gt;2020-05-07 01:06:12,512 - Directory['/var/run/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}&lt;BR /&gt;2020-05-07 01:06:12,513 - Directory['/var/log/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}&lt;BR /&gt;2020-05-07 01:06:12,514 - Directory['/var/lib/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}&lt;BR /&gt;2020-05-07 01:06:12,514 - XmlConfig['hivemetastore-site.xml'] {'group': 'hadoop', 'conf_dir': 
'/usr/hdp/current/hive-metastore/conf/', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}&lt;BR /&gt;2020-05-07 01:06:12,523 - Generating config: /usr/hdp/current/hive-metastore/conf/hivemetastore-site.xml&lt;BR /&gt;2020-05-07 01:06:12,524 - File['/usr/hdp/current/hive-metastore/conf/hivemetastore-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}&lt;BR /&gt;2020-05-07 01:06:12,537 - File['/usr/hdp/current/hive-metastore/conf/hadoop-metrics2-hivemetastore.properties'] {'content': Template('hadoop-metrics2-hivemetastore.properties.j2'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}&lt;BR /&gt;2020-05-07 01:06:12,539 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}&lt;BR /&gt;2020-05-07 01:06:12,540 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.4.0-315/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://localhost:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']}&lt;BR /&gt;2020-05-07 01:06:12,542 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}&lt;BR /&gt;2020-05-07 01:06:12,542 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json&lt;BR /&gt;2020-05-07 01:06:12,543 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json'] {'content': Template('input.config-hive.json.j2'), 'mode': 0644}&lt;BR /&gt;2020-05-07 01:06:12,543 - Execute['export 
HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose'] {'not_if': "ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -info -dbType mysql -userName hive -passWord [PROTECTED] -verbose'", 'user': 'hive'}&lt;BR /&gt;&lt;BR /&gt;Command failed after 1 tries&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 06 May 2020 19:44:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-MetaStore-always-fails-to-start/m-p/295545#M217801</guid>
      <dc:creator>Udhav</dc:creator>
      <dc:date>2020-05-06T19:44:53Z</dc:date>
    </item>
    <item>
      <title>Re: Hive MetaStore always fails to start</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-MetaStore-always-fails-to-start/m-p/295555#M217811</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/76902"&gt;@Udhav&lt;/a&gt;&amp;nbsp;you will need to grant the 'hive' user permissions to access the metastore database in MySQL.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;Underlying cause: java.sql.SQLException : &lt;FONT color="#FF0000"&gt;Access denied for user 'hive'@'localhost'&lt;/FONT&gt; (using password: YES)&lt;BR /&gt;SQL Error code: 1045&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;For example, in MySQL:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;CREATE DATABASE hive;&lt;BR /&gt;CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';&lt;BR /&gt;GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost' WITH GRANT OPTION;&lt;BR /&gt;FLUSH PRIVILEGES;&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In the Ambari Admin Hive Config Database tab, and during the Cluster Install Wizard for Hive, there should be a Test Connection button for the Hive Metastore. &amp;nbsp;Use this feature to test the connection during install.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, just to make sure: Ambari requires the MySQL JDBC connector as well:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;&lt;SPAN&gt;To use MySQL with Hive, you must &lt;/SPAN&gt;&lt;A href="https://dev.mysql.com/downloads/connector/j/" target="_blank" rel="noopener noreferrer"&gt;download the MySQL Connector/J from MySQL&lt;/A&gt;&lt;SPAN&gt;. Once downloaded to the Ambari Server host, run: &lt;/SPAN&gt;&lt;BR /&gt;&lt;STRONG&gt;ambari-server setup --jdbc-db=mysql --jdbc-driver=/path/to/mysql/mysql-connector-java.jar&lt;/STRONG&gt;&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. 
If you have new questions related to your Use Case, please create a separate topic and feel free to tag me in your post.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;A href="https://www.dfheinz.com" target="_blank" rel="noopener"&gt;Steven&amp;nbsp;@ DFHZ&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 06 May 2020 23:53:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-MetaStore-always-fails-to-start/m-p/295555#M217811</guid>
      <dc:creator>stevenmatison</dc:creator>
      <dc:date>2020-05-06T23:53:44Z</dc:date>
    </item>
    <item>
      <title>Re: Hive MetaStore always fails to start</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-MetaStore-always-fails-to-start/m-p/296110#M218115</link>
      <description>&lt;P&gt;Thank you&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/60150"&gt;@stevenmatison&lt;/a&gt;&amp;nbsp;it's working now.&lt;/P&gt;</description>
      <pubDate>Mon, 18 May 2020 10:31:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-MetaStore-always-fails-to-start/m-p/296110#M218115</guid>
      <dc:creator>Udhav</dc:creator>
      <dc:date>2020-05-18T10:31:35Z</dc:date>
    </item>
  </channel>
</rss>