
Hive metastore start-up issue

New Contributor

Hi all,

I am running HDP 2.5.3 and using PostgreSQL as the Hive metastore database. Whenever I try to restart the Hive Metastore service, it attempts to re-create the metastore tables and the restart fails. Please help me resolve this issue.

If I drop the Hive database in PostgreSQL and re-create it, the Hive Metastore starts and works again.
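Before dropping and re-creating the database, it may help to compare the schema version recorded in PostgreSQL with the one schematool expects. A minimal sketch, using the paths from the log below; the database/user names and password are assumptions, so adjust them to your environment:

```shell
# Hypothetical database and user names -- adjust to your cluster.
# 1) Schema version recorded in the PostgreSQL metastore DB:
psql -U hive -d hive -c 'SELECT "SCHEMA_VERSION", "VERSION_COMMENT" FROM "VERSION";'

# 2) What schematool reports (read-only; this is the same check
#    Ambari runs before deciding to re-initialize the schema):
export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server
/usr/hdp/current/hive-server2-hive2/bin/schematool -info -dbType postgres \
    -userName hive -passWord '<password>' -verbose
```

If the two versions disagree, or the `VERSION` table is missing, that is usually why the restart tries to re-create the tables.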

Error Message:

2017-03-15 00:04:25,491 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-03-15 00:04:25,492 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-03-15 00:04:25,492 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-03-15 00:04:25,517 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-03-15 00:04:25,518 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-03-15 00:04:25,543 - checked_call returned (0, '')
2017-03-15 00:04:25,543 - Ensuring that hadoop has the correct symlink structure
2017-03-15 00:04:25,543 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-03-15 00:04:25,690 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-03-15 00:04:25,691 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-03-15 00:04:25,691 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-03-15 00:04:25,716 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-03-15 00:04:25,717 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-03-15 00:04:25,741 - checked_call returned (0, '')
2017-03-15 00:04:25,742 - Ensuring that hadoop has the correct symlink structure
2017-03-15 00:04:25,742 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-03-15 00:04:25,743 - Group['livy'] {}
2017-03-15 00:04:25,745 - Group['spark'] {}
2017-03-15 00:04:25,745 - Group['ranger'] {}
2017-03-15 00:04:25,746 - Group['hadoop'] {}
2017-03-15 00:04:25,746 - Group['users'] {}
2017-03-15 00:04:25,747 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-03-15 00:04:25,752 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-03-15 00:04:25,753 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-03-15 00:04:25,753 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-03-15 00:04:25,754 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2017-03-15 00:04:25,755 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-03-15 00:04:25,756 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-03-15 00:04:25,757 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-03-15 00:04:25,758 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-03-15 00:04:25,759 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-03-15 00:04:25,760 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-03-15 00:04:25,761 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-03-15 00:04:25,762 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-03-15 00:04:25,763 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-03-15 00:04:25,765 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-03-15 00:04:25,769 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-03-15 00:04:25,769 - Group['hdfs'] {}
2017-03-15 00:04:25,770 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-03-15 00:04:25,770 - FS Type: 
2017-03-15 00:04:25,771 - Directory['/etc/hadoop'] {'mode': 0755}
2017-03-15 00:04:25,802 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2017-03-15 00:04:25,803 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-03-15 00:04:25,816 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-03-15 00:04:25,821 - Skipping Execute[('setenforce', '0')] due to not_if
2017-03-15 00:04:25,821 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-03-15 00:04:25,823 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-03-15 00:04:25,824 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-03-15 00:04:25,830 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2017-03-15 00:04:25,832 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2017-03-15 00:04:25,833 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-03-15 00:04:25,846 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-03-15 00:04:25,847 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-03-15 00:04:25,848 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-03-15 00:04:25,855 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-03-15 00:04:25,858 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-03-15 00:04:26,110 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-03-15 00:04:26,110 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-03-15 00:04:26,111 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-03-15 00:04:26,141 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-03-15 00:04:26,142 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-03-15 00:04:26,167 - checked_call returned (0, '')
2017-03-15 00:04:26,168 - Ensuring that hadoop has the correct symlink structure
2017-03-15 00:04:26,168 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-03-15 00:04:26,175 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}
2017-03-15 00:04:26,201 - call returned (0, 'hive-server2 - 2.5.3.0-37')
2017-03-15 00:04:26,202 - Stack Feature Version Info: stack_version=2.5, version=2.5.3.0-37, current_cluster_version=2.5.3.0-37 -> 2.5.3.0-37
2017-03-15 00:04:26,213 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive.pid 1>/tmp/tmpUVqODm 2>/tmp/tmpMq0UXv''] {'quiet': False}
2017-03-15 00:04:26,248 - call returned (1, '')
2017-03-15 00:04:26,249 - Execution of 'cat /var/run/hive/hive.pid 1>/tmp/tmpUVqODm 2>/tmp/tmpMq0UXv' returned 1. cat: /var/run/hive/hive.pid: No such file or directory

2017-03-15 00:04:26,250 - Execute['ambari-sudo.sh kill '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1)'}
2017-03-15 00:04:26,253 - Skipping Execute['ambari-sudo.sh kill '] due to not_if
2017-03-15 00:04:26,254 - Execute['ambari-sudo.sh kill -9 '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1) || ( sleep 5 && ! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1) )', 'ignore_failures': True}
2017-03-15 00:04:26,257 - Skipping Execute['ambari-sudo.sh kill -9 '] due to not_if
2017-03-15 00:04:26,257 - Execute['! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 3}
2017-03-15 00:04:26,263 - File['/var/run/hive/hive.pid'] {'action': ['delete']}
2017-03-15 00:04:26,263 - Pid file /var/run/hive/hive.pid is empty or does not exist
2017-03-15 00:04:26,267 - Directory['/etc/hive'] {'mode': 0755}
2017-03-15 00:04:26,268 - Directories to fill with configs: ['/usr/hdp/current/hive-metastore/conf', '/usr/hdp/current/hive-metastore/conf/conf.server']
2017-03-15 00:04:26,268 - Directory['/usr/hdp/current/hive-metastore/conf'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True}
2017-03-15 00:04:26,268 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2017-03-15 00:04:26,281 - Generating config: /usr/hdp/current/hive-metastore/conf/mapred-site.xml
2017-03-15 00:04:26,282 - File['/usr/hdp/current/hive-metastore/conf/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-03-15 00:04:26,333 - File['/usr/hdp/current/hive-metastore/conf/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2017-03-15 00:04:26,333 - File['/usr/hdp/current/hive-metastore/conf/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2017-03-15 00:04:26,334 - File['/usr/hdp/current/hive-metastore/conf/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2017-03-15 00:04:26,334 - File['/usr/hdp/current/hive-metastore/conf/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2017-03-15 00:04:26,335 - Directory['/usr/hdp/current/hive-metastore/conf/conf.server'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True}
2017-03-15 00:04:26,335 - Changing owner for /usr/hdp/current/hive-metastore/conf/conf.server from 517 to hive
2017-03-15 00:04:26,335 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2017-03-15 00:04:26,345 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml
2017-03-15 00:04:26,345 - File['/usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-03-15 00:04:26,393 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2017-03-15 00:04:26,394 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2017-03-15 00:04:26,394 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2017-03-15 00:04:26,395 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2017-03-15 00:04:26,395 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {'hidden': {'javax.jdo.option.ConnectionPassword': 'HIVE_CLIENT,WEBHCAT_SERVER,HCAT,CONFIG_DOWNLOAD'}}, 'owner': 'hive', 'configurations': ...}
2017-03-15 00:04:26,405 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml
2017-03-15 00:04:26,405 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-03-15 00:04:26,558 - XmlConfig['hivemetastore-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': {'hive.service.metrics.hadoop2.component': 'hivemetastore', 'hive.metastore.metrics.enabled': 'true', 'hive.service.metrics.file.location': '/var/log/hive/hivemetastore-report.json', 'hive.service.metrics.reporter': 'JSON_FILE, JMX, HADOOP2'}}
2017-03-15 00:04:26,568 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hivemetastore-site.xml
2017-03-15 00:04:26,568 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hivemetastore-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-03-15 00:04:26,577 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop'}
2017-03-15 00:04:26,577 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-03-15 00:04:26,580 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-03-15 00:04:26,581 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://hdp25-iaas-dev01-control01.dev.na1.phsdp.com:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2017-03-15 00:04:26,582 - Not downloading the file from http://hdp25-iaas-dev01-control01.dev.na1.phsdp.com:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2017-03-15 00:04:26,586 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hadoop-metrics2-hivemetastore.properties'] {'content': Template('hadoop-metrics2-hivemetastore.properties.j2'), 'owner': 'hive', 'group': 'hadoop'}
2017-03-15 00:04:26,587 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}
2017-03-15 00:04:26,588 - Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType postgres -userName hive -passWord [PROTECTED] -verbose'] {'not_if': u"ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -info -dbType postgres -userName hive -passWord [PROTECTED] -verbose'", 'user': 'hive'}

Command failed after 1 tries
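One thing worth noting from the last `Execute` line in the log: Ambari only runs `schematool -initSchema` when the `not_if` guard (a `schematool -info` probe) fails. In rough shell pseudocode, the logic is (a sketch, with the full argument lists elided):

```shell
# Paraphrase of Ambari's guard, not a command to run as-is:
# -initSchema (table creation) is attempted only when the read-only
# -info check fails, e.g. because the VERSION table is missing or
# holds an unexpected schema version.
if ! schematool -info -dbType postgres ... ; then
    schematool -initSchema -dbType postgres ...
fi
```

So the real question is why `-info` is failing against your existing database; running it by hand (as the hive user, with the same `HIVE_CONF_DIR`) should surface the underlying error.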
3 Replies

Re: hive metastore start up issue

@sankaranarayanan ramakrishnan

Could you please provide the exact error message you are seeing in the logs? I don't see any error in the output you pasted.

Re: hive metastore start up issue

New Contributor

Thanks, we found the issue. In HDP 2.5 the Hive metastore schema version is higher than the schema version Spark 1.6 expects, which is why the metastore schema gets corrupted when we access the Hive metastore from a Spark application using local-mode (embedded) metastore access.
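For anyone hitting the same mismatch: one common mitigation is to make the metastore refuse implicit schema changes from older clients, and to have Spark talk to the metastore over Thrift instead of opening the database directly in embedded mode. A hedged sketch of the relevant `hive-site.xml` settings (the property names are standard Hive settings; the host placeholder is hypothetical):

```xml
<!-- hive-site.xml seen by the Spark application (illustrative) -->
<!-- Fail fast on a version mismatch instead of silently altering tables -->
<property>
  <name>hive.metastore.schema.verification</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>
<!-- Use the remote metastore service rather than embedded access -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://&lt;metastore-host&gt;:9083</value>
</property>
```

With remote (Thrift) access, the Spark 1.6 client never touches the backing PostgreSQL schema directly, so a client-side version mismatch cannot corrupt it.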

Re: hive metastore start up issue

@Ramakrishnan Sankaranarayanan

Please mark the question as answered to close the thread.