<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Unable to start hive metastore - Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167336#M129668</link>
    <description>Hive Metastore fails to start through Ambari with a UnicodeDecodeError ('ascii' codec can't decode byte 0xd0); see the first item for the full stderr/stdout output.</description>
    <pubDate>Wed, 25 May 2016 19:15:23 GMT</pubDate>
    <dc:creator>dzianis_frydlia</dc:creator>
    <dc:date>2016-05-25T19:15:23Z</dc:date>
    <item>
      <title>Unable to start hive metastore</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167336#M129668</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I'm trying to install and run Hive in my test environment.&lt;/P&gt;&lt;P&gt;During Hive startup I get a message that the Hive Metastore failed to start.&lt;/P&gt;&lt;P&gt;Here's the output:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;stderr:&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in &amp;lt;module&amp;gt;
    HiveMetastore().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 60, in start
    hive_service('metastore', action='start', upgrade_type=upgrade_type)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service.py", line 68, in hive_service
    pid = get_user_call_output.get_user_call_output(format("cat {pid_file}"), user=params.hive_user, is_checked_call=False)[1]
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 58, in get_user_call_output
    err_msg = Logger.filter_text(("Execution of '%s' returned %d. %s") % (command_string, code, all_output))
  File "/usr/lib/python2.6/site-packages/resource_management/core/logger.py", line 101, in filter_text
    text = text.replace(unprotected_string, protected_string)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 117: ordinal not in range(128)
&lt;/PRE&gt;&lt;P&gt;&lt;STRONG&gt;stdout:&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;2016-05-25 14:58:18,934 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-05-25 14:58:18,935 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-05-25 14:58:18,935 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-25 14:58:18,955 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-05-25 14:58:18,956 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-25 14:58:18,976 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -&amp;gt; /etc/hadoop/2.4.0.0-169/0')
2016-05-25 14:58:18,976 - Ensuring that hadoop has the correct symlink structure
2016-05-25 14:58:18,976 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-25 14:58:19,065 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-05-25 14:58:19,065 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-05-25 14:58:19,065 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-25 14:58:19,085 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-05-25 14:58:19,086 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-25 14:58:19,105 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -&amp;gt; /etc/hadoop/2.4.0.0-169/0')
2016-05-25 14:58:19,105 - Ensuring that hadoop has the correct symlink structure
2016-05-25 14:58:19,105 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-25 14:58:19,107 - Group['spark'] {}
2016-05-25 14:58:19,108 - Group['hadoop'] {}
2016-05-25 14:58:19,108 - Group['users'] {}
2016-05-25 14:58:19,108 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,109 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,109 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-05-25 14:58:19,110 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,110 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-05-25 14:58:19,111 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,111 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-05-25 14:58:19,112 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,112 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,113 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,113 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,114 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,114 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,115 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,115 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-25 14:58:19,117 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-05-25 14:58:19,122 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-05-25 14:58:19,123 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-05-25 14:58:19,123 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-25 14:58:19,124 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-05-25 14:58:19,130 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-05-25 14:58:19,130 - Group['hdfs'] {}
2016-05-25 14:58:19,130 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-05-25 14:58:19,131 - FS Type: 
2016-05-25 14:58:19,131 - Directory['/etc/hadoop'] {'mode': 0755}
2016-05-25 14:58:19,143 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-05-25 14:58:19,144 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-05-25 14:58:19,155 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce &amp;amp;&amp;amp; getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-05-25 14:58:19,162 - Skipping Execute[('setenforce', '0')] due to not_if
2016-05-25 14:58:19,163 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:19,164 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:19,165 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:19,168 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-05-25 14:58:19,170 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-05-25 14:58:19,171 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,182 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-05-25 14:58:19,183 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-05-25 14:58:19,187 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-05-25 14:58:19,193 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-05-25 14:58:19,355 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-05-25 14:58:19,355 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-05-25 14:58:19,356 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-25 14:58:19,377 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-05-25 14:58:19,377 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-25 14:58:19,397 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -&amp;gt; /etc/hadoop/2.4.0.0-169/0')
2016-05-25 14:58:19,397 - Ensuring that hadoop has the correct symlink structure
2016-05-25 14:58:19,397 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-25 14:58:19,436 - Directory['/etc/hive'] {'mode': 0755}
2016-05-25 14:58:19,437 - Directory['/usr/hdp/current/hive-metastore/conf'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-05-25 14:58:19,438 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-05-25 14:58:19,452 - Generating config: /usr/hdp/current/hive-metastore/conf/mapred-site.xml
2016-05-25 14:58:19,452 - File['/usr/hdp/current/hive-metastore/conf/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-05-25 14:58:19,485 - File['/usr/hdp/current/hive-metastore/conf/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,485 - File['/usr/hdp/current/hive-metastore/conf/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,485 - File['/usr/hdp/current/hive-metastore/conf/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,486 - File['/usr/hdp/current/hive-metastore/conf/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,486 - Directory['/usr/hdp/current/hive-metastore/conf/conf.server'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-05-25 14:58:19,486 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-05-25 14:58:19,494 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml
2016-05-25 14:58:19,494 - File['/usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-05-25 14:58:19,525 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,525 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,526 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,526 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,527 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-05-25 14:58:19,534 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml
2016-05-25 14:58:19,534 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-05-25 14:58:19,638 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,638 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-05-25 14:58:19,641 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-05-25 14:58:19,642 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://h1.sdd.d4.org:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2016-05-25 14:58:19,642 - Not downloading the file from &lt;A href="http://h1.sdd.d4.org:8080/resources/DBConnectionVerification.jar" target="_blank"&gt;http://h1.sdd.d4.org:8080/resources/DBConnectionVerification.jar&lt;/A&gt;, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2016-05-25 14:58:19,642 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}
2016-05-25 14:58:19,644 - Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]'] {'not_if': "ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -info -dbType mysql -userName hive -passWord [PROTECTED]'", 'user': 'hive'}
2016-05-25 14:58:23,524 - Skipping Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]'] due to not_if
2016-05-25 14:58:23,525 - Directory['/var/run/hive'] {'owner': 'hive', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:23,525 - Directory['/var/log/hive'] {'owner': 'hive', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:23,526 - Directory['/var/lib/hive'] {'owner': 'hive', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:23,527 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive.pid 1&amp;gt;/tmp/tmppLcf6w 2&amp;gt;/tmp/tmpw1ccfI''] {'quiet': False}
2016-05-25 14:58:23,565 - call returned (1, '')
&lt;/PRE&gt;&lt;P&gt;I'm using the HDP-2.4.0.0-169 stack with Ambari 2.2.2.0.&lt;/P&gt;</description>
      <pubDate>Wed, 25 May 2016 19:15:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167336#M129668</guid>
      <dc:creator>dzianis_frydlia</dc:creator>
      <dc:date>2016-05-25T19:15:23Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to start hive metastore</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167337#M129669</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/2805/dzianisfrydliand.html" nodeid="2805"&gt;@Dennis Fridlyand&lt;/A&gt;&lt;P&gt;I resolved this issue by following the link below.&lt;/P&gt;&lt;P&gt;&lt;A href="http://stackoverflow.com/questions/21129020/how-to-fix-unicodedecodeerror-ascii-codec-cant-decode-byte" target="_blank"&gt;http://stackoverflow.com/questions/21129020/how-to-fix-unicodedecodeerror-ascii-codec-cant-decode-byte&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 25 May 2016 19:21:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167337#M129669</guid>
      <dc:creator>jyadav</dc:creator>
      <dc:date>2016-05-25T19:21:47Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to start hive metastore</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167338#M129670</link>
      <description>&lt;P&gt;Hi &lt;A rel="user" href="https://community.cloudera.com/users/2805/dzianisfrydliand.html" nodeid="2805"&gt;@Dennis Fridlyand&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;This issue is related to Jira AMBARI-14823.&lt;/P&gt;&lt;P&gt;&lt;A href="https://issues.apache.org/jira/browse/AMBARI-14823" target="_blank"&gt;https://issues.apache.org/jira/browse/AMBARI-14823&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Please change the locale and LANG of the operating system to UTF-8 and see if that helps.&lt;/P&gt;&lt;P&gt;Thanks and Regards,&lt;/P&gt;&lt;P&gt;Sindhu&lt;/P&gt;</description>
      <pubDate>Wed, 25 May 2016 19:22:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167338#M129670</guid>
      <dc:creator>ssubhas</dc:creator>
      <dc:date>2016-05-25T19:22:11Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to start hive metastore</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167339#M129671</link>
      <description>&lt;P&gt;Which Ambari version are you running? There is a Jira (https://issues.apache.org/jira/browse/AMBARI-14823) open for this in 2.2.0.&lt;/P&gt;</description>
      <pubDate>Wed, 25 May 2016 19:24:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167339#M129671</guid>
      <dc:creator>rajkumar_singh</dc:creator>
      <dc:date>2016-05-25T19:24:31Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to start hive metastore</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167340#M129672</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/5019/ssubhas.html" nodeid="5019"&gt;@Sindhu&lt;/A&gt;, here's what env returns:&lt;/P&gt;&lt;PRE&gt;[root@h1 hive]# env | grep LAN
LANG=ru_RU.UTF-8
&lt;/PRE&gt;</description>
      <pubDate>Wed, 25 May 2016 19:29:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167340#M129672</guid>
      <dc:creator>dzianis_frydlia</dc:creator>
      <dc:date>2016-05-25T19:29:58Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to start hive metastore</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167341#M129673</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2805/dzianisfrydliand.html" nodeid="2805"&gt;@Dennis&lt;/A&gt; &lt;/P&gt;&lt;P&gt;Can you try LANG=en_US.UTF-8?&lt;/P&gt;&lt;P&gt;It seems the Russian locale is being picked up. &lt;/P&gt;&lt;P&gt;Thanks and Regards,&lt;/P&gt;&lt;P&gt;Sindhu&lt;/P&gt;</description>
      <pubDate>Wed, 25 May 2016 19:35:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167341#M129673</guid>
      <dc:creator>ssubhas</dc:creator>
      <dc:date>2016-05-25T19:35:21Z</dc:date>
    </item>
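The fix suggested above (switching LANG to en_US.UTF-8) can be sketched as a shell change; the `en_US` choice and the sysconfig path are assumptions for a RHEL/CentOS-era host like the one in the log, not values from the thread:

```shell
# Sketch, assuming a RHEL/CentOS 6-era host (as in the Ambari log above).
# Make UTF-8 the locale the Ambari agent's Python inherits, so the ascii
# codec is not used to decode command output.
export LANG=en_US.UTF-8   # takes effect for the current shell only
locale                    # check: LANG and LC_* should now show en_US.UTF-8

# To make it permanent (hypothetical path, RHEL 6 convention), put
#   LANG=en_US.UTF-8
# in /etc/sysconfig/i18n, then restart the agent so it picks up the
# new environment:
#   ambari-agent restart
```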
    <item>
      <title>Re: Unable to start hive metastore</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167342#M129674</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/5019/ssubhas.html" nodeid="5019"&gt;@Sindhu&lt;/A&gt;, thanks, it helped!&lt;/P&gt;</description>
      <pubDate>Wed, 25 May 2016 19:42:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/167342#M129674</guid>
      <dc:creator>dzianis_frydlia</dc:creator>
      <dc:date>2016-05-25T19:42:10Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to start hive metastore</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/290095#M214666</link>
      <description>&lt;P&gt;Writing this so that it can help someone in the future:&lt;/P&gt;&lt;P&gt;I was installing Hive and getting an error that the Hive metastore wasn't able to connect; I resolved it by recreating the Hive metastore database.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Somehow the user that had been created in the MySQL Hive metastore wasn't working properly and couldn't authenticate. So I dropped the metastore DB, dropped the user, recreated the metastore DB, recreated the user, granted all privileges, and then it worked without issues.&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2020 17:20:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-start-hive-metastore/m-p/290095#M214666</guid>
      <dc:creator>Vj1989</dc:creator>
      <dc:date>2020-02-19T17:20:07Z</dc:date>
    </item>
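The drop-and-recreate steps described above can be sketched as MySQL statements run from the shell; the database name, user, host, and password here are placeholders, not values from the thread:

```shell
# Sketch of the drop-and-recreate fix described above. All names are
# placeholders: adjust the DB name, user, host, and password to your setup.
mysql -u root -p <<'SQL'
DROP DATABASE IF EXISTS hive;
DROP USER 'hive'@'%';   -- fails if absent; MySQL 5.7+ has DROP USER IF EXISTS
CREATE DATABASE hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'StrongPasswordHere';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL

# Re-initialise the metastore schema afterwards (HDP path from the log above):
# /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql
```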
  </channel>
</rss>

