Unable to start hive metastore

Hi,

I'm trying to install and run Hive in my test environment.

During Hive startup I get a message that the Hive Metastore failed to start.

Here's the output:

stderr:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in <module>
    HiveMetastore().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 60, in start
    hive_service('metastore', action='start', upgrade_type=upgrade_type)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service.py", line 68, in hive_service
    pid = get_user_call_output.get_user_call_output(format("cat {pid_file}"), user=params.hive_user, is_checked_call=False)[1]
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 58, in get_user_call_output
    err_msg = Logger.filter_text(("Execution of '%s' returned %d. %s") % (command_string, code, all_output))
  File "/usr/lib/python2.6/site-packages/resource_management/core/logger.py", line 101, in filter_text
    text = text.replace(unprotected_string, protected_string)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 117: ordinal not in range(128)

stdout:

2016-05-25 14:58:18,934 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-05-25 14:58:18,935 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-05-25 14:58:18,935 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-25 14:58:18,955 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-05-25 14:58:18,956 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-25 14:58:18,976 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-05-25 14:58:18,976 - Ensuring that hadoop has the correct symlink structure
2016-05-25 14:58:18,976 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-25 14:58:19,065 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-05-25 14:58:19,065 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-05-25 14:58:19,065 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-25 14:58:19,085 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-05-25 14:58:19,086 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-25 14:58:19,105 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-05-25 14:58:19,105 - Ensuring that hadoop has the correct symlink structure
2016-05-25 14:58:19,105 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-25 14:58:19,107 - Group['spark'] {}
2016-05-25 14:58:19,108 - Group['hadoop'] {}
2016-05-25 14:58:19,108 - Group['users'] {}
2016-05-25 14:58:19,108 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,109 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,109 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-05-25 14:58:19,110 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,110 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-05-25 14:58:19,111 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,111 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-05-25 14:58:19,112 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,112 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,113 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,113 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,114 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,114 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,115 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,115 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-25 14:58:19,117 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-05-25 14:58:19,122 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-05-25 14:58:19,123 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-05-25 14:58:19,123 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-25 14:58:19,124 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-05-25 14:58:19,130 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-05-25 14:58:19,130 - Group['hdfs'] {}
2016-05-25 14:58:19,130 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-05-25 14:58:19,131 - FS Type: 
2016-05-25 14:58:19,131 - Directory['/etc/hadoop'] {'mode': 0755}
2016-05-25 14:58:19,143 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-05-25 14:58:19,144 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-05-25 14:58:19,155 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-05-25 14:58:19,162 - Skipping Execute[('setenforce', '0')] due to not_if
2016-05-25 14:58:19,163 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:19,164 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:19,165 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:19,168 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-05-25 14:58:19,170 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-05-25 14:58:19,171 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,182 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-05-25 14:58:19,183 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-05-25 14:58:19,187 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-05-25 14:58:19,193 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-05-25 14:58:19,355 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-05-25 14:58:19,355 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-05-25 14:58:19,356 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-25 14:58:19,377 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-05-25 14:58:19,377 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-25 14:58:19,397 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-05-25 14:58:19,397 - Ensuring that hadoop has the correct symlink structure
2016-05-25 14:58:19,397 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-25 14:58:19,436 - Directory['/etc/hive'] {'mode': 0755}
2016-05-25 14:58:19,437 - Directory['/usr/hdp/current/hive-metastore/conf'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-05-25 14:58:19,438 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-05-25 14:58:19,452 - Generating config: /usr/hdp/current/hive-metastore/conf/mapred-site.xml
2016-05-25 14:58:19,452 - File['/usr/hdp/current/hive-metastore/conf/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-05-25 14:58:19,485 - File['/usr/hdp/current/hive-metastore/conf/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,485 - File['/usr/hdp/current/hive-metastore/conf/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,485 - File['/usr/hdp/current/hive-metastore/conf/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,486 - File['/usr/hdp/current/hive-metastore/conf/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,486 - Directory['/usr/hdp/current/hive-metastore/conf/conf.server'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-05-25 14:58:19,486 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-05-25 14:58:19,494 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml
2016-05-25 14:58:19,494 - File['/usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-05-25 14:58:19,525 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,525 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,526 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,526 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,527 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-05-25 14:58:19,534 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml
2016-05-25 14:58:19,534 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-05-25 14:58:19,638 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,638 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-05-25 14:58:19,641 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-05-25 14:58:19,642 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://h1.sdd.d4.org:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2016-05-25 14:58:19,642 - Not downloading the file from http://h1.sdd.d4.org:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2016-05-25 14:58:19,642 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}
2016-05-25 14:58:19,644 - Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]'] {'not_if': "ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -info -dbType mysql -userName hive -passWord [PROTECTED]'", 'user': 'hive'}
2016-05-25 14:58:23,524 - Skipping Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]'] due to not_if
2016-05-25 14:58:23,525 - Directory['/var/run/hive'] {'owner': 'hive', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:23,525 - Directory['/var/log/hive'] {'owner': 'hive', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:23,526 - Directory['/var/lib/hive'] {'owner': 'hive', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:23,527 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive.pid 1>/tmp/tmppLcf6w 2>/tmp/tmpw1ccfI''] {'quiet': False}
2016-05-25 14:58:23,565 - call returned (1, '')

I'm using the HDP-2.4.0.0-169 stack with Ambari 2.2.2.0.

1 ACCEPTED SOLUTION

Hi @Dennis Fridlyand,

This issue is related to Jira AMBARI-14823.

https://issues.apache.org/jira/browse/AMBARI-14823
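
What appears to happen here: the agent runs cat /var/run/hive/hive.pid (the last call in your stdout), and with a non-English locale the error message comes back containing non-ASCII bytes, which Python 2 then cannot mix with a unicode string in Logger.filter_text. A minimal sketch of the same failure, assuming Python 2 as in your traceback:

# Minimal repro: a byte string holding UTF-8 Cyrillic text (0xd0 is the
# lead byte) gets implicitly decoded as ASCII when .replace() is called
# with unicode arguments, as Logger.filter_text does:
python -c "b = '\xd0\xbe\xd1\x88\xd0\xb8\xd0\xb1\xd0\xba\xd0\xb0'; b.replace(u'a', u'b')"
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)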

Please change the operating system's locale and LANG to a UTF-8 locale and see if it helps.
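
A rough sketch of the change, assuming RHEL/CentOS 6 (the config file lives elsewhere on other distros):

# Set the locale system-wide; /etc/sysconfig/i18n may hold other settings,
# so edit it by hand rather than overwriting if it does:
echo 'LANG=en_US.UTF-8' > /etc/sysconfig/i18n
export LANG=en_US.UTF-8
# Restart the agent so the metastore start command inherits the new locale:
ambari-agent restart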

Thanks and Regards,

Sindhu

@Sindhu, here's what env returns:

[root@h1 hive]# env | grep LAN
LANG=ru_RU.UTF-8

@Dennis

Can you try LANG=en_US.UTF-8?

It seems the Russian locale is being picked up.
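
Before switching, it's worth checking that the locale is actually installed on the host:

# List installed locales and confirm en_US.UTF-8 is among them:
locale -a | grep -i 'en_us.utf'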

Thanks and Regards,

Sindhu

@Sindhu, thanks, it helped!

What Ambari version are you running? There is a Jira (https://issues.apache.org/jira/browse/AMBARI-14823) for 2.2.0.

Writing this so that it can help someone in the future:

I was installing Hive and getting an error that the Hive metastore wasn't able to connect. I resolved it by recreating the Hive metastore database.

Somehow the user that had been created in MySQL for the Hive metastore wasn't working properly and could not authenticate. So I dropped the metastore DB and the user, recreated both, granted all privileges, and then it worked without issues.
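
A sketch of the recovery, assuming the metastore database and user are both named hive, as in a default Ambari setup (substitute your own names and password):

mysql -u root -p <<'SQL'
DROP DATABASE hive;
DROP USER 'hive'@'%';
CREATE DATABASE hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive_password';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL
# Re-initialize the schema afterwards (Ambari also runs this on start):
/usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord hive_password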