
Hive Metastore, HiveServer2, and MySQL server stopped. Can anyone please help me fix this?

Explorer
stderr: /var/lib/ambari-agent/data/errors-366.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in <module>
    HiveMetastore().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 524, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 58, in start
    self.configure(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 72, in configure
    hive(name = 'metastore')
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 292, in hive
    user = params.hive_user
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1. WARNING: Use "yarn jar" to launch YARN applications.
Metastore connection URL:	 jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver :	 com.mysql.jdbc.Driver
Metastore connection User:	 hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***
stdout: /var/lib/ambari-agent/data/output-366.txt
2016-07-24 18:49:25,210 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-07-24 18:49:25,211 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-07-24 18:49:25,213 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-24 18:49:25,294 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-07-24 18:49:25,295 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-24 18:49:25,370 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-07-24 18:49:25,370 - Ensuring that hadoop has the correct symlink structure
2016-07-24 18:49:25,371 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-24 18:49:25,689 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-07-24 18:49:25,689 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-07-24 18:49:25,690 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-24 18:49:25,761 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-07-24 18:49:25,762 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-24 18:49:25,805 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-07-24 18:49:25,805 - Ensuring that hadoop has the correct symlink structure
2016-07-24 18:49:25,806 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-24 18:49:25,809 - Group['hadoop'] {}
2016-07-24 18:49:25,811 - Group['users'] {}
2016-07-24 18:49:25,811 - Group['zeppelin'] {}
2016-07-24 18:49:25,812 - Group['knox'] {}
2016-07-24 18:49:25,812 - Group['ranger'] {}
2016-07-24 18:49:25,812 - Group['spark'] {}
2016-07-24 18:49:25,813 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-07-24 18:49:25,814 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,815 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,816 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-07-24 18:49:25,818 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,819 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,820 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,821 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-07-24 18:49:25,823 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,824 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,825 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,826 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,827 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-07-24 18:49:25,828 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,830 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,831 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-07-24 18:49:25,832 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,833 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,835 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,837 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,838 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,840 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-07-24 18:49:25,844 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-07-24 18:49:25,854 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-07-24 18:49:25,855 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-07-24 18:49:25,858 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-07-24 18:49:25,861 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-07-24 18:49:25,870 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-07-24 18:49:25,871 - Group['hdfs'] {}
2016-07-24 18:49:25,872 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-07-24 18:49:25,874 - Directory['/etc/hadoop'] {'mode': 0755}
2016-07-24 18:49:25,932 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-07-24 18:49:25,933 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-07-24 18:49:25,973 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-07-24 18:49:25,990 - Skipping Execute[('setenforce', '0')] due to not_if
2016-07-24 18:49:25,993 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-07-24 18:49:25,999 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-07-24 18:49:26,001 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-07-24 18:49:26,017 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-07-24 18:49:26,022 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-07-24 18:49:26,023 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:26,050 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-07-24 18:49:26,052 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-07-24 18:49:26,054 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-07-24 18:49:26,069 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-07-24 18:49:26,081 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-07-24 18:49:26,606 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-07-24 18:49:26,610 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-07-24 18:49:26,611 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-24 18:49:26,682 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-07-24 18:49:26,683 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-24 18:49:26,747 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-07-24 18:49:26,748 - Ensuring that hadoop has the correct symlink structure
2016-07-24 18:49:26,748 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-24 18:49:26,844 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive.pid 1>/tmp/tmp3i1gJ4 2>/tmp/tmp8FUYgf''] {'quiet': False}
2016-07-24 18:49:27,025 - call returned (1, '')
2016-07-24 18:49:27,026 - Execution of 'cat /var/run/hive/hive.pid 1>/tmp/tmp3i1gJ4 2>/tmp/tmp8FUYgf' returned 1. cat: /var/run/hive/hive.pid: No such file or directory

2016-07-24 18:49:27,029 - Execute['ambari-sudo.sh kill '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1)'}
2016-07-24 18:49:27,037 - Skipping Execute['ambari-sudo.sh kill '] due to not_if
2016-07-24 18:49:27,038 - Execute['ambari-sudo.sh kill -9 '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1) || ( sleep 5 && ! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1) )'}
2016-07-24 18:49:27,047 - Skipping Execute['ambari-sudo.sh kill -9 '] due to not_if
2016-07-24 18:49:27,048 - Execute['! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 3}
2016-07-24 18:49:27,059 - File['/var/run/hive/hive.pid'] {'action': ['delete']}
2016-07-24 18:49:27,065 - Directory['/etc/hive'] {'mode': 0755}
2016-07-24 18:49:27,066 - Directory['/usr/hdp/current/hive-metastore/conf'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-07-24 18:49:27,067 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-07-24 18:49:27,102 - Generating config: /usr/hdp/current/hive-metastore/conf/mapred-site.xml
2016-07-24 18:49:27,103 - File['/usr/hdp/current/hive-metastore/conf/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-07-24 18:49:27,208 - File['/usr/hdp/current/hive-metastore/conf/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-07-24 18:49:27,209 - File['/usr/hdp/current/hive-metastore/conf/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-07-24 18:49:27,210 - File['/usr/hdp/current/hive-metastore/conf/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:27,223 - File['/usr/hdp/current/hive-metastore/conf/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:27,225 - Directory['/usr/hdp/current/hive-metastore/conf/conf.server'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-07-24 18:49:27,225 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-07-24 18:49:27,245 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml
2016-07-24 18:49:27,245 - File['/usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-07-24 18:49:27,393 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-07-24 18:49:27,394 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-07-24 18:49:27,395 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:27,404 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:27,406 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-07-24 18:49:27,422 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml
2016-07-24 18:49:27,422 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-07-24 18:49:27,638 - Package['atlas-metadata*-hive-plugin'] {}
2016-07-24 18:49:37,280 - Skipping installation of existing package atlas-metadata*-hive-plugin
2016-07-24 18:49:37,282 - PropertiesFile['/usr/hdp/current/hive-metastore/conf/conf.server/client.properties'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'properties': {'atlas.http.authentication.type': 'simple', 'atlas.http.authentication.enabled': 'false'}}
2016-07-24 18:49:37,346 - Generating properties file: /usr/hdp/current/hive-metastore/conf/conf.server/client.properties
2016-07-24 18:49:37,347 - File['/usr/hdp/current/hive-metastore/conf/conf.server/client.properties'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:37,442 - Writing File['/usr/hdp/current/hive-metastore/conf/conf.server/client.properties'] because contents don't match
2016-07-24 18:49:37,461 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop'}
2016-07-24 18:49:37,567 - Writing File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh'] because contents don't match
2016-07-24 18:49:37,570 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-07-24 18:49:37,617 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-07-24 18:49:37,626 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://sandbox.hortonworks.com:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2016-07-24 18:49:37,627 - Not downloading the file from http://sandbox.hortonworks.com:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2016-07-24 18:49:37,641 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}
2016-07-24 18:49:37,664 - Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]'] {'not_if': "ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -info -dbType mysql -userName hive -passWord [PROTECTED]'", 'user': 'hive'}
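In short, the restart fails at this very last step: Ambari runs schematool -initSchema against MySQL and it returns 1 with "Failed to get schema version", which, as the replies below suggest, usually means the metastore cannot reach its backing database at all. The same schema check Ambari uses as its not_if guard can be run by hand; this is the command taken from the log above, with the protected password left as a placeholder for you to fill in:

# su - hive
$ export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server
$ /usr/hdp/current/hive-metastore/bin/schematool -info -dbType mysql -userName hive -passWord <your-hive-db-password>

If this also fails, the problem is on the MySQL side (daemon down, wrong credentials, or connectivity), not the Hive side.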
16 Replies

@Swati Gupta

Can you cross-check the database password you have provided in the Hive configs?
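For example, to see the password value that Ambari actually deployed (using the conf.server path from the stdout log above; run as root), something like:

# grep -A 2 'javax.jdo.option.ConnectionPassword' /usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml

In the Ambari UI the same value should live under Hive > Configs, in the "Database Password" field.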

Explorer

@Sandeep Nemuri

I haven't set up any password. I have simply installed HDP and changed the passwords for the Ambari admin and root...

@Swati Gupta

Have you recently performed any activity on the cluster, such as an upgrade?

Explorer

@Sagar Shimpi

No, no cluster upgrades. I have only added DBCP connection pool settings in Hive... After that, to fix this, I changed permissions and tried to restart the services many times, but every time I get "MySQL Daemon failed to start".

Now I am trying to install the MySQL server and set a password, but I get "cannot connect to Hortonworks sandbox"...
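("MySQL Daemon failed to start" usually has its reason recorded in the MySQL error log; on the CentOS-based sandbox that is typically /var/log/mysqld.log, and a full disk is a common culprit on the sandbox VM:

# service mysqld status
# tail -n 50 /var/log/mysqld.log
# df -h

The paths above assume the default sandbox layout; adjust for your OS.)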

Rising Star

@Swati Gupta

Hi, from the stderr logs it seems that your Hive Metastore is not able to access the Hive database (in this case MySQL). Could you please let us know how you created the DB in MySQL?

Are you able to connect to your MySQL with the "Database Username" and "Database Password" you provided in the Hive configs in Ambari?

# mysql -u $username -h $hostname -p
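With the values shown in the stderr log above (connection user hive, host sandbox.hortonworks.com), that would be:

# mysql -u hive -h sandbox.hortonworks.com -p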

Explorer

I didn't create the DB; I am using HDP with default settings so far, and I didn't set a database password in the Hive configs in Ambari. But now when I executed the command # mysql -u $username -h $hostname -p, I am neither able to set a password nor able to connect to the sandbox... I will share a screenshot in a few hours.

Explorer

Sharing the screenshot (attachment: 6042-2016-07-25-2.png): "Lost connection to MySQL server at 'reading initial communication packet'"
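That error means the client reached the host but nothing answered on the MySQL port, which is consistent with the daemon failing to start. Assuming the single-node sandbox layout, the next checks would be:

# service mysqld start
# netstat -tlnp | grep 3306

If nothing is listening on 3306 after the start attempt, the reason should be in the MySQL error log mentioned above.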