
MySQL Server is installed but cannot be started

Contributor

I installed Hive together with MySQL Server. MySQL Server itself was installed successfully, but I cannot start the service because of the following errors:

stderr: /var/lib/ambari-agent/data/errors-148.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_server.py", line 64, in <module>
    MysqlServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_server.py", line 49, in start
    mysql_service(daemon_name=params.daemon_name, action='start')
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_service.py", line 45, in mysql_service
    sudo = True,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'service mysqld start' returned 1. Redirecting to /bin/systemctl start mysqld.service
Job for mysqld.service failed because a timeout was exceeded. See "systemctl status mysqld.service" and "journalctl -xe" for details.

stdout: /var/lib/ambari-agent/data/output-148.txt

2018-04-16 12:39:28,697 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-04-16 12:39:28,714 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-04-16 12:39:28,878 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-04-16 12:39:28,884 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-04-16 12:39:28,885 - Group['hdfs'] {}
2018-04-16 12:39:28,886 - Group['hadoop'] {}
2018-04-16 12:39:28,886 - Group['users'] {}
2018-04-16 12:39:28,887 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-16 12:39:28,888 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-16 12:39:28,888 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-16 12:39:28,889 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-04-16 12:39:28,890 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-04-16 12:39:28,891 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-04-16 12:39:28,892 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-16 12:39:28,892 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-16 12:39:28,893 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-16 12:39:28,894 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-04-16 12:39:28,895 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-04-16 12:39:28,901 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-04-16 12:39:28,902 - Group['hdfs'] {}
2018-04-16 12:39:28,902 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-04-16 12:39:28,902 - FS Type: 
2018-04-16 12:39:28,903 - Directory['/etc/hadoop'] {'mode': 0755}
2018-04-16 12:39:28,918 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-04-16 12:39:28,919 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-04-16 12:39:28,935 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2018-04-16 12:39:28,942 - Skipping Execute[('setenforce', '0')] due to not_if
2018-04-16 12:39:28,942 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-04-16 12:39:28,945 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-04-16 12:39:28,945 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-04-16 12:39:28,949 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2018-04-16 12:39:28,951 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2018-04-16 12:39:28,957 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2018-04-16 12:39:28,967 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-04-16 12:39:28,968 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2018-04-16 12:39:28,969 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2018-04-16 12:39:28,973 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2018-04-16 12:39:28,978 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2018-04-16 12:39:28,991 - Skipping stack-select on MYSQL_SERVER because it does not exist in the stack-select package structure.
2018-04-16 12:39:29,176 - MariaDB RedHat Support: false
2018-04-16 12:39:29,181 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-04-16 12:39:29,195 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}
2018-04-16 12:39:29,219 - call returned (0, 'hive-server2 - 2.6.4.0-91')
2018-04-16 12:39:29,220 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-04-16 12:39:29,255 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://eureambarimaster1.local.eurecat.org:8080/resources/CredentialUtil.jar'), 'mode': 0755}
2018-04-16 12:39:29,257 - Not downloading the file from http://eureambarimaster1.local.eurecat.org:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists
2018-04-16 12:39:29,257 - checked_call[('/usr/lib/jvm/java-1.8.0-openjdk/bin/java', '-cp', u'/var/lib/ambari-agent/cred/lib/*', 'org.apache.ambari.server.credentialapi.CredentialUtil', 'get', 'javax.jdo.option.ConnectionPassword', '-provider', u'jceks://file/var/lib/ambari-agent/cred/conf/mysql_server/hive-site.jceks')] {}
2018-04-16 12:39:29,939 - checked_call returned (0, 'SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".\nSLF4J: Defaulting to no-operation (NOP) logger implementation\nSLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.\nApr 16, 2018 12:39:29 PM org.apache.hadoop.util.NativeCodeLoader <clinit>\nWARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable\nEurecat@123')
2018-04-16 12:39:29,948 - Execute[('service', 'mysqld', 'start')] {'logoutput': True, 'not_if': "pgrep -l '^mysqld", 'sudo': True}
Redirecting to /bin/systemctl start mysqld.service
Job for mysqld.service failed because a timeout was exceeded. See "systemctl status mysqld.service" and "journalctl -xe" for details.
2018-04-16 12:49:30,339 - Skipping stack-select on MYSQL_SERVER because it does not exist in the stack-select package structure.
 

Command failed after 1 tries

The logs say:

180416 12:28:20 mysqld_safe Logging to '/var/log/mysqld.log'.
180416 12:28:20 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
2018-04-16 12:28:20 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-04-16 12:28:20 0 [Note] /usr/sbin/mysqld (mysqld 5.6.39) starting as process 18545 ...
2018-04-16 12:28:20 18545 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)
2018-04-16 12:28:20 18545 [Warning] Buffered warning: Changed limits: table_open_cache: 431 (requested 2000)
2018-04-16 12:28:20 18545 [Note] Plugin 'FEDERATED' is disabled.
/usr/sbin/mysqld: Unknown storage engine 'InnoDB'
2018-04-16 12:28:20 18545 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.

If I run "mysql_update", I get this:

# mysql_upgrade

Looking for 'mysql' as: mysql

Looking for 'mysqlcheck' as: mysqlcheck

Error: Failed while fetching Server version!

Could be due to unauthorized access. FATAL ERROR: Upgrade failed

7 REPLIES

Master Mentor

@Liana Napalkova

Did you install MySQL manually? When you install HDP, MySQL is not installed automatically. You will need to run:

yum install -y mysql-server

Then also install the connector:

# yum install -y mysql-connector-java* 

Remember to make MySQL start automatically on boot with:

# chkconfig mysqld on

Then log in to MySQL and create the Hive database and user.
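Your stderr also shows the service command being redirected to systemctl, so the host is running systemd. A rough equivalent of the two commands above (assuming the unit is named mysqld.service, as it appears in your error output) would be:

# systemctl enable mysqld.service
# systemctl start mysqld.service
# systemctl status mysqld.service

The status output should also hint at why the unit keeps timing out.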

Contributor

What I did manually is the following:

  1. sudo su -
  2. wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
  3. rpm -ivh mysql-community-release-el7-5.noarch.rpm

Then I added the Hive service using the Ambari UI. According to the UI, MySQL Server was installed successfully as one of the steps of the Hive installation process. If I had not done the above steps, the installation of MySQL Server would have failed with the message "resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install mysql-community-release' returned 1. Error: Nothing to do".

The file "mysql-connector-java.jar" is downloaded fromhttps://dev.mysql.com/downloads/connector/j/5.1.html

Then I added it to "/usr/share/java" and executed:

# ls -al /usr/share/java/mysql-connector-java.jar

# chmod 644 /usr/share/java/mysql-connector-java.jar

# ls -l /var/lib/ambari-server/resources/mysql-connector-java.jar

# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

I cannot log in to MySQL. I tried to retrieve the temporary root password with "grep 'A temporary password is generated for root@localhost' /var/log/mysqld.log | tail -1" and then run "/usr/bin/mysql_secure_installation", but I get:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

Super Collaborator

Can you try running "systemctl status mysqld.service"?

It seems that your MySQL daemon is not up, which will also cause mysql_upgrade to fail. I think this entry from the log could point to the issue:


/usr/sbin/mysqld: Unknown storage engine 'InnoDB'
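A quick diagnostic sketch, using only the commands and paths already mentioned in your output, that should surface the underlying startup error:

# systemctl status mysqld.service
# journalctl -u mysqld.service --no-pager | tail -n 50
# tail -n 50 /var/log/mysqld.log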

Master Mentor (Accepted Solution)

@Liana Napalkova

You are trying to add the Hive service without a database to host the metastore catalog.

Can you access the MySQL database as root? If so, proceed with the steps below; if not, run /usr/bin/mysql_secure_installation and answer its prompts.

Here I am assuming the root password is welcome1 and the hive password is hivepwd.

mysql -u root -pwelcome1 
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hivepwd'; 
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost'; 
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%'; 
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'FQDN_MYSQL_HOST'; 
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost' IDENTIFIED BY 'hivepwd' WITH GRANT OPTION; 
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'FQDN_MYSQL_HOST' IDENTIFIED BY 'hive' WITH GRANT OPTION; 
FLUSH PRIVILEGES; quit; 

# Then, as the hive user created above, create the Hive database.

mysql -u hive -phivepwd 
CREATE DATABASE hive; 
quit; 

Now run the Ambari Hive setup with the above credentials.
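Before doing so, you can double-check that the account and database are in place (a quick verification sketch, reusing the same example passwords and the FQDN_MYSQL_HOST placeholder from above):

mysql -u hive -phivepwd -e "SHOW DATABASES LIKE 'hive';"
mysql -u root -pwelcome1 -e "SHOW GRANTS FOR 'hive'@'localhost';"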



Master Mentor

@Liana Napalkova

Is "eureambarislave1.local.eurecat.org" the valid hostname for your MySQL database server

java.sql.SQLException: Access denied for user 'hive'@'eureambarislave1.local.eurecat.org' (using password: YES)
There was a typo in my previous command; it should be

GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'FQDN_MYSQL_HOST' IDENTIFIED BY 'hivepwd' WITH GRANT OPTION;

instead of

GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'FQDN_MYSQL_HOST' IDENTIFIED BY 'hive' WITH GRANT OPTION;

Run this as root:

# mysql -u root -pwelcome1
mysql> use hive;
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'FQDN_MYSQL_HOST' IDENTIFIED BY 'hivepwd' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;

That should correct the access-denied issue.

Please revert!
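If access is still denied after that, listing the grants for the hive user is a quick sanity check (substitute your real MySQL host FQDN for the placeholder; same example passwords as before):

# mysql -u root -pwelcome1 -e "SHOW GRANTS FOR 'hive'@'FQDN_MYSQL_HOST';"
# mysql -u hive -phivepwd -h FQDN_MYSQL_HOST -e "SELECT 1;"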

Contributor

Indeed, I just replaced "eureambarislave1.local.eurecat.org" with "localhost" during the Hive installation process. Hive was installed successfully without any alert. I assume that "localhost" also works fine.

Master Mentor

@Liana Napalkova

Good to know. Happy hadooping!