Hive Metastore trouble with JDBC MySQL

Explorer

I have freshly installed Hive 1.2.1 with Ambari 2.5.1. I mainly want to use it with Spark2.

The "Start Hive Metastore" and "Restart Hive Metastore" operations consistently fail with some kind of connection error. I chose the "new mysql database" option when installing Hive. Using mysqladmin, I have verified that MySql is running. I checked that I have mysql connector jar installed in /usr/hdp/current/hive-client/lib/

The error from the (re)start Hive Metastore operation seems to indicate that "schematool -initSchema -dbType mysql" has a JDBC connection issue.

Here is much more detail from the error message.

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 211, in <module>
    HiveMetastore().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 850, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 61, in start
    create_metastore_schema()
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 381, in create_metastore_schema
    user = params.hive_user
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose' returned 1.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://hadoop12.neocortix.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException : Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
SQL Error code: 0
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
    at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:80)
    at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:133)
    at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:187)
    at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:291)
    at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)
    at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
    at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:983)
    at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:339)
    at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2252)
    at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2285)
    at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2084)
    at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:795)
    at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:44)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
    at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:400)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:327)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:247)
    at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:76)
    ... 11 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:214)
    at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:298)
    ... 26 more
*** schemaTool failed ***

7 REPLIES

Master Mentor

@Michael Coffey

We see the following error:

Metastore connection URL: jdbc:mysql://hadoop12.xxxxx.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException : Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
SQL Error code: 0
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
    at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:80) 
.
.
org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:76) 
... 11 more Caused by: java.net.ConnectException: Connection refused (Connection refused) 
    at java.net.PlainSocketImpl.socketConnect(Native Method) 
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)

.
Basically, this is a MySQL connection issue, so you should check a couple of things first:

1). On the machine where MySQL is running, please check whether the MySQL process is up:

# ssh root@hadoop12.neocortix.com
# ps -ef | grep mysql
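
If the host uses systemd, the service status is another quick check; this is only a sketch, since the unit name differs between distributions (both common names shown):

# systemctl status mysql     # Ubuntu/Debian package usually names the unit "mysql"
# systemctl status mysqld    # RHEL/CentOS package usually names it "mysqld"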

2). Check whether the MySQL port is open by passing the MySQL server PID to the following command:

# netstat -tnlpa | grep `cat /var/run/mysqld/mysqld.pid`
tcp6       0      0 :::3306                 :::*                    LISTEN      1235/mysqld   
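
If the mysqld PID file is not at that path, the same check can be done by port number instead (a sketch; `ss` is the newer alternative where netstat is not installed):

# netstat -tnlp | grep 3306
# ss -tnlp | grep 3306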

3). Check whether you are able to establish a JDBC connection to MySQL using the following kind of Java utility, from the ambari-server host AND from the host where MySQL is installed:

# /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/share/java/mysql-connector-java.jar -Djava.library.path=/var/lib/ambari-agent/cache org.apache.ambari.server.DBConnectionVerification "jdbc:mysql://hadoop12.xxxxx.com/hive" "hive" "hive" com.mysql.jdbc.Driver

**NOTE:** The JAR "DBConnectionVerification.jar" is provided by Ambari for DB connection checks. You will need to change the Java path in the above command to match your environment.

Also, please use the correct hostname (change hadoop12.xxxxx.com to the correct one) to verify connectivity. (I have masked the hostname.)

The above will isolate whether the issue is with MySQL DB connectivity.
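
In addition to the Ambari utility, a plain mysql client test over TCP exercises the same path the JDBC driver uses. This is only a sketch; the hostname and credentials below are the masked placeholders from above and should be adjusted:

# mysql -h hadoop12.xxxxx.com -P 3306 -u hive -p -e "SELECT 1;"    # a non-localhost -h forces a TCP connection instead of the local socket

If this works when run on the MySQL host itself but fails from the ambari-server host, the problem is network or bind-address related rather than a credentials issue.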


Usually the "CommunicationsException : Communications link failure" error indicates that there is either some MySQL DB server issue or the N/W issue.

.

.

Explorer

Thanks for the suggestions, Jay @Jay SenSharma. Steps 1 and 2 work fine -- mysqld is definitely running and listening on port 3306. However, the DBConnectionVerification test fails, saying:

ERROR: Unable to connect to the DB. Please check DB connection properties.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.

Master Mentor

@Michael Coffey

Can you please check and share the contents of the "mysqld.log" file so that we can see if it shows any issues?

# less /var/log/mysqld.log 

.

Explorer

hadoop12:~$ less /var/log/mysqld.log
/var/log/mysqld.log: No such file or directory

Master Mentor

@Michael Coffey

The MySQL log location is usually defined inside the MySQL config file "/etc/my.cnf":

# grep 'log-error' /etc/my.cnf
log-error=/var/log/mysqld.log

.

The MySQL log can be really helpful to find out why it is failing with the "Communications link failure" error.
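
If /etc/my.cnf does not mention log-error (on Ubuntu/Debian packages the config usually lives under /etc/mysql/ instead), the running server can also be asked directly; a sketch, assuming root credentials:

# mysql -u root -p -e "SHOW VARIABLES LIKE 'log_error';"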

Explorer

2017-10-02T19:56:25.586238Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-10-02T19:56:25.587415Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.19-0ubuntu0.16.04.1) starting as process 13964 ...
2017-10-02T19:56:25.591310Z 0 [Note] InnoDB: PUNCH HOLE support available
2017-10-02T19:56:25.591328Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-10-02T19:56:25.591340Z 0 [Note] InnoDB: Uses event mutexes
2017-10-02T19:56:25.591348Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-10-02T19:56:25.591354Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
2017-10-02T19:56:25.591361Z 0 [Note] InnoDB: Using Linux native AIO
2017-10-02T19:56:25.591570Z 0 [Note] InnoDB: Number of pools: 1
2017-10-02T19:56:25.591677Z 0 [Note] InnoDB: Using CPU crc32 instructions
2017-10-02T19:56:25.592686Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2017-10-02T19:56:25.597336Z 0 [Note] InnoDB: Completed initialization of buffer pool
2017-10-02T19:56:25.598489Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2017-10-02T19:56:25.611131Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2017-10-02T19:56:25.622987Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2017-10-02T19:56:25.623057Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2017-10-02T19:56:25.627648Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2017-10-02T19:56:25.628231Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2017-10-02T19:56:25.628253Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2017-10-02T19:56:25.628499Z 0 [Note] InnoDB: Waiting for purge to start
2017-10-02T19:56:25.678705Z 0 [Note] InnoDB: 5.7.19 started; log sequence number 2540099
2017-10-02T19:56:25.679048Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2017-10-02T19:56:25.679091Z 0 [Note] Plugin 'FEDERATED' is disabled.
2017-10-02T19:56:25.684400Z 0 [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
2017-10-02T19:56:25.684428Z 0 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
2017-10-02T19:56:25.684440Z 0 [Note] - '127.0.0.1' resolves to '127.0.0.1';
2017-10-02T19:56:25.684464Z 0 [Note] Server socket created on IP: '127.0.0.1'.
2017-10-02T19:56:25.684880Z 0 [Note] InnoDB: Buffer pool(s) load completed at 171002 12:56:25
2017-10-02T19:56:25.711359Z 0 [Note] Event Scheduler: Loaded 0 events
2017-10-02T19:56:25.711482Z 0 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.7.19-0ubuntu0.16.04.1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
2017-10-02T19:56:25.711503Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
2017-10-02T19:56:25.711510Z 0 [Note] Beginning of list of non-natively partitioned tables
2017-10-02T19:56:25.738694Z 0 [Note] End of list of non-natively partitioned tables
2017-10-02T20:38:58.938453Z 4 [Note] Access denied for user 'root'@'localhost'
2017-10-02T22:41:51.485528Z 0 [Note] Giving 0 client threads a chance to die gracefully
2017-10-02T22:41:51.485632Z 0 [Note] Shutting down slave threads
2017-10-02T22:41:51.485654Z 0 [Note] Forcefully disconnecting 0 remaining clients
2017-10-02T22:41:51.485690Z 0 [Note] Event Scheduler: Purging the queue. 0 events
2017-10-02T22:41:51.485962Z 0 [Note] Binlog end
2017-10-02T22:41:51.487831Z 0 [Note] Shutting down plugin 'auth_socket'
2017-10-02T22:41:51.487861Z 0 [Note] Shutting down plugin 'ngram'
2017-10-02T22:41:51.487869Z 0 [Note] Shutting down plugin 'partition'
2017-10-02T22:41:51.487881Z 0 [Note] Shutting down plugin 'BLACKHOLE'
2017-10-02T22:41:51.487888Z 0 [Note] Shutting down plugin 'ARCHIVE'
2017-10-02T22:41:51.487894Z 0 [Note] Shutting down plugin 'CSV'

Master Mentor

@Michael Coffey

The problem seems to be this:

2017-10-02T19:56:25.684428Z 0 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
2017-10-02T19:56:25.684440Z 0 [Note] - '127.0.0.1' resolves to '127.0.0.1';
2017-10-02T19:56:25.684464Z 0 [Note] Server socket created on IP: '127.0.0.1'.

.

Your MySQL is starting and listening only on the "127.0.0.1" bind address.

You should edit the "bind-address" attribute inside your "/etc/my.cnf" so that it binds to the hostname or to all addresses:

bind-address=0.0.0.0

.

https://dev.mysql.com/doc/refman/5.7/en/server-options.html

  • If the address is 0.0.0.0, the server accepts TCP/IP connections on all server host IPv4 interfaces.
  • If the address is ::, the server accepts TCP/IP connections on all server host IPv4 and IPv6 interfaces.
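
A minimal sketch of applying and verifying the change; the exact config file is an assumption (on Ubuntu-packaged MySQL 5.7, which the log above suggests, bind-address is usually set in /etc/mysql/mysql.conf.d/mysqld.cnf rather than /etc/my.cnf):

# grep -rn 'bind-address' /etc/my.cnf /etc/mysql/ 2>/dev/null   # find where it is currently set
# vi /etc/my.cnf                                                # or the file found above; set: bind-address = 0.0.0.0
# systemctl restart mysql                                       # or: service mysql restart
# netstat -tnlpa | grep 3306                                    # should now show 0.0.0.0:3306 or :::3306, not 127.0.0.1:3306

After the restart, re-running the DBConnectionVerification command from earlier in the thread should confirm whether schematool can now reach the metastore database.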