
Hive Metastore, HiveServer2, and MySQL server stopped. Can anyone please help me fix this?

Explorer
stderr: /var/lib/ambari-agent/data/errors-366.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in <module>
    HiveMetastore().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 524, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 58, in start
    self.configure(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 72, in configure
    hive(name = 'metastore')
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 292, in hive
    user = params.hive_user
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1. WARNING: Use "yarn jar" to launch YARN applications.
Metastore connection URL:	 jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver :	 com.mysql.jdbc.Driver
Metastore connection User:	 hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***
stdout: /var/lib/ambari-agent/data/output-366.txt
2016-07-24 18:49:25,210 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-07-24 18:49:25,211 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-07-24 18:49:25,213 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-24 18:49:25,294 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-07-24 18:49:25,295 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-24 18:49:25,370 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-07-24 18:49:25,370 - Ensuring that hadoop has the correct symlink structure
2016-07-24 18:49:25,371 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-24 18:49:25,689 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-07-24 18:49:25,689 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-07-24 18:49:25,690 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-24 18:49:25,761 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-07-24 18:49:25,762 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-24 18:49:25,805 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-07-24 18:49:25,805 - Ensuring that hadoop has the correct symlink structure
2016-07-24 18:49:25,806 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-24 18:49:25,809 - Group['hadoop'] {}
2016-07-24 18:49:25,811 - Group['users'] {}
2016-07-24 18:49:25,811 - Group['zeppelin'] {}
2016-07-24 18:49:25,812 - Group['knox'] {}
2016-07-24 18:49:25,812 - Group['ranger'] {}
2016-07-24 18:49:25,812 - Group['spark'] {}
2016-07-24 18:49:25,813 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-07-24 18:49:25,814 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,815 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,816 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-07-24 18:49:25,818 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,819 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,820 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,821 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-07-24 18:49:25,823 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,824 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,825 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,826 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,827 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-07-24 18:49:25,828 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,830 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,831 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-07-24 18:49:25,832 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,833 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,835 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,837 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,838 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-24 18:49:25,840 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-07-24 18:49:25,844 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-07-24 18:49:25,854 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-07-24 18:49:25,855 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-07-24 18:49:25,858 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-07-24 18:49:25,861 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-07-24 18:49:25,870 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-07-24 18:49:25,871 - Group['hdfs'] {}
2016-07-24 18:49:25,872 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-07-24 18:49:25,874 - Directory['/etc/hadoop'] {'mode': 0755}
2016-07-24 18:49:25,932 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-07-24 18:49:25,933 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-07-24 18:49:25,973 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-07-24 18:49:25,990 - Skipping Execute[('setenforce', '0')] due to not_if
2016-07-24 18:49:25,993 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-07-24 18:49:25,999 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-07-24 18:49:26,001 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-07-24 18:49:26,017 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-07-24 18:49:26,022 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-07-24 18:49:26,023 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:26,050 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-07-24 18:49:26,052 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-07-24 18:49:26,054 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-07-24 18:49:26,069 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-07-24 18:49:26,081 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-07-24 18:49:26,606 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-07-24 18:49:26,610 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-07-24 18:49:26,611 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-24 18:49:26,682 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-07-24 18:49:26,683 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-24 18:49:26,747 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-07-24 18:49:26,748 - Ensuring that hadoop has the correct symlink structure
2016-07-24 18:49:26,748 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-24 18:49:26,844 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive.pid 1>/tmp/tmp3i1gJ4 2>/tmp/tmp8FUYgf''] {'quiet': False}
2016-07-24 18:49:27,025 - call returned (1, '')
2016-07-24 18:49:27,026 - Execution of 'cat /var/run/hive/hive.pid 1>/tmp/tmp3i1gJ4 2>/tmp/tmp8FUYgf' returned 1. cat: /var/run/hive/hive.pid: No such file or directory

2016-07-24 18:49:27,029 - Execute['ambari-sudo.sh kill '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1)'}
2016-07-24 18:49:27,037 - Skipping Execute['ambari-sudo.sh kill '] due to not_if
2016-07-24 18:49:27,038 - Execute['ambari-sudo.sh kill -9 '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1) || ( sleep 5 && ! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1) )'}
2016-07-24 18:49:27,047 - Skipping Execute['ambari-sudo.sh kill -9 '] due to not_if
2016-07-24 18:49:27,048 - Execute['! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 3}
2016-07-24 18:49:27,059 - File['/var/run/hive/hive.pid'] {'action': ['delete']}
2016-07-24 18:49:27,065 - Directory['/etc/hive'] {'mode': 0755}
2016-07-24 18:49:27,066 - Directory['/usr/hdp/current/hive-metastore/conf'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-07-24 18:49:27,067 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-07-24 18:49:27,102 - Generating config: /usr/hdp/current/hive-metastore/conf/mapred-site.xml
2016-07-24 18:49:27,103 - File['/usr/hdp/current/hive-metastore/conf/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-07-24 18:49:27,208 - File['/usr/hdp/current/hive-metastore/conf/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-07-24 18:49:27,209 - File['/usr/hdp/current/hive-metastore/conf/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-07-24 18:49:27,210 - File['/usr/hdp/current/hive-metastore/conf/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:27,223 - File['/usr/hdp/current/hive-metastore/conf/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:27,225 - Directory['/usr/hdp/current/hive-metastore/conf/conf.server'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-07-24 18:49:27,225 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-07-24 18:49:27,245 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml
2016-07-24 18:49:27,245 - File['/usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-07-24 18:49:27,393 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-07-24 18:49:27,394 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-07-24 18:49:27,395 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:27,404 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:27,406 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-07-24 18:49:27,422 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml
2016-07-24 18:49:27,422 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-07-24 18:49:27,638 - Package['atlas-metadata*-hive-plugin'] {}
2016-07-24 18:49:37,280 - Skipping installation of existing package atlas-metadata*-hive-plugin
2016-07-24 18:49:37,282 - PropertiesFile['/usr/hdp/current/hive-metastore/conf/conf.server/client.properties'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'properties': {'atlas.http.authentication.type': 'simple', 'atlas.http.authentication.enabled': 'false'}}
2016-07-24 18:49:37,346 - Generating properties file: /usr/hdp/current/hive-metastore/conf/conf.server/client.properties
2016-07-24 18:49:37,347 - File['/usr/hdp/current/hive-metastore/conf/conf.server/client.properties'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
2016-07-24 18:49:37,442 - Writing File['/usr/hdp/current/hive-metastore/conf/conf.server/client.properties'] because contents don't match
2016-07-24 18:49:37,461 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop'}
2016-07-24 18:49:37,567 - Writing File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh'] because contents don't match
2016-07-24 18:49:37,570 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-07-24 18:49:37,617 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-07-24 18:49:37,626 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://sandbox.hortonworks.com:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2016-07-24 18:49:37,627 - Not downloading the file from http://sandbox.hortonworks.com:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2016-07-24 18:49:37,641 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}
2016-07-24 18:49:37,664 - Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]'] {'not_if': "ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -info -dbType mysql -userName hive -passWord [PROTECTED]'", 'user': 'hive'}
16 Replies

Rising Star

@Swati Gupta

Can you check whether your MySQL server is up and running? Also, are you able to connect as the hive user?

# service mysqld status

# mysql -u hive -p

Enter password: hive
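
For a quick non-interactive check, the same test can be run in one line (this assumes the hive user's password is "hive", as above; adjust if yours differs):

# mysql -u hive -phive -e 'SELECT 1;'

If that prints a result, MySQL is up and the hive credentials work.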

Explorer

@krajguru

mysqld status: stopped

While connecting as the hive user: Cannot connect to local MySQL server through socket "/var/lib/mysql/mysql.sock"

Super Guru

@Swati Gupta - Can you please check /var/log/mysqld.log for errors? Please post the error trace here if you need any help.
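
For example, to grab the most recent entries (assuming the default log location):

# tail -n 50 /var/log/mysqld.log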

Super Guru

@Swati Gupta

Can you please try the steps below:

  1. # cd /var/lib/mysql
  2. # mkdir bak
  3. # mv ibdata1 bak/.
  4. # mv ib_logfile* bak/.
  5. # cp -a bak/ibdata1 ibdata1
  6. # cp -a bak/ib_logfile* .
  7. # service mysql restart

Reference - http://notesonit.blogspot.hk/2013/05/innodb-unable-to-lock-ibdata1-error-11.html
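
The linked post is about the InnoDB "Unable to lock ./ibdata1, error: 11" failure, so before moving any files it is worth confirming that this is actually what shows up in your log (a quick check, assuming the default log location):

# grep -i 'unable to lock' /var/log/mysqld.log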

Explorer

@Kuldeep Kulkarni

I have tried the steps mentioned above. All commands went fine, but when I executed the last command I got "unrecognized service".

When I executed # service mysqld restart, I got "MySQL Daemon failed to start".
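
Note: the "unrecognized service" error for service mysql restart in step 7 is most likely just a naming issue; on the CentOS-based sandbox the MySQL init script is usually called mysqld, not mysql. You can confirm which init scripts are installed with:

# ls /etc/init.d/ | grep -i mysql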

Explorer

@Benjamin Leonhardi @Kuldeep Kulkarni

Hive services are not working: Hive Metastore, HiveServer2, and MySQL Server are all down.

What I have tried so far:

# service mysqld status -- mysqld stopped

# service mysqld restart -- MySQL Daemon failed to start

# stat -c "%a %n" /var/lib/mysql/ -- 755 permission

# /var/run/mysqld/mysqld.pid -- doesn't exist

# /var/lib/mysql/mysql.sock -- doesn't exist

# chown -R mysql /var/lib/mysql

# chgrp -R mysql /var/lib/mysql

# chmod a+r /var/run/mysqld/mysqld.sock -- No such file or directory

# /usr/bin/mysqld_safe & -- leaves a blank screen; after that I tried:

# /usr/bin/mysqladmin -u root -h sandbox.hortonworks.com password "password" -- error: connect to server failed, "system error 111"
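
System error 111 is "connection refused": nothing is listening on the MySQL port, which is consistent with mysqld being down. One way to confirm, assuming the default port 3306 and that netstat is available:

# netstat -tlnp | grep 3306

An empty result means no MySQL server is listening.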

# $HIVE_HOME/bin/schematool -initSchema -dbtype MySQL -- $HIVE_HOME not found

actual Hive path: /usr/hdp/2.4.0.0-169
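
For reference: $HIVE_HOME is not set by default on the sandbox, and the correct flag is -dbType with a lowercase database name. The manual equivalent of the command Ambari runs, taken from the log above, would be roughly the following, with <hive-db-password> as a placeholder for the actual hive DB password:

# export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server
# /usr/hdp/current/hive-metastore/bin/schematool -info -dbType mysql -userName hive -passWord <hive-db-password>

This will keep failing until MySQL itself is back up.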

# yum install hive-hcatalog -- error: cannot access mirror files (screenshot attached)

I have changed the DB password through Ambari and added "datanucleus.connectionPoolingType = dbcp" to the custom hive-site.
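
To confirm that the property actually made it into the server-side config the metastore reads, you can check the generated hive-site.xml (path taken from the log above):

# grep -A1 datanucleus.connectionPoolingType /usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml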

Screenshots of the terminal errors are attached, and the Ambari logs above show the schematool error.

Is it required to drop the DB or uninstall Hive? Do I have to reinstall the Hortonworks sandbox?

Please share suggestions to fix this issue. I have been stuck on this for the past 3 days. Thank you so much!

Attachments: 6076-2016-07-25-1.png, 6077-2016-07-25-2.png, 6078-2016-07-25-3.png