
Spark2 history server upgrade points toward wrong node for hive metastore

Contributor

I'm upgrading Ambari and HDP from 2.6.5 to 3.0.1 and am currently struggling with the Spark2 history server: the upgrade points toward the wrong node for the Hive metastore.

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/job_history_server.py", line 102, in <module>
    JobHistoryServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
    method(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 993, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/job_history_server.py", line 55, in start
    spark_service('jobhistoryserver', upgrade_type=upgrade_type, action='start')
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_service.py", line 106, in spark_service
    user = params.hive_user)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/hdp/current/hive-client/bin/schematool -dbType mysql -createCatalog spark -catalogDescription 'Default catalog, for Spark' -ifNotExists -catalogLocation hdfs://hadoopmaster.****.****:8020/apps/spark/warehouse' returned 1. /usr/hdp/3.0.1.0-187/hive/conf/hive-env.sh: line 48: [: !=: unary operator expected
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Create catalog spark at location hdfs://hadoopmaster.****.****:8020/apps/spark/warehouse
Metastore connection URL:	 jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true
Metastore Connection Driver :	 com.mysql.jdbc.Driver
Metastore connection User:	 hive
Fri Oct 19 14:17:44 CEST 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: java.sql.SQLException : Access denied for user 'hive'@'localhost' (using password: YES)
SQL Error code: 1045
Use --verbose for detailed stacktrace.
*** schemaTool failed ***

The Hive metastore is on the node hadoopslave01 and I have no problem connecting to it using the user 'hive'@'localhost'. It seems that Spark has the wrong address and goes for the node hadoopmaster. To test this hypothesis I created a MySQL user 'hive'@'localhost' on hadoopmaster and resumed the upgrade, which this time created a new (empty) Hive metastore on hadoopmaster (and then crashed because of missing tables).
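
A quick way to test where the metastore database actually lives is to ask MySQL directly on each candidate node; a minimal sketch, assuming the mysql client is available and using the masked hostname from this thread:

# On each candidate node, list the hosts the hive user is defined for:
mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user = 'hive';"

# Then test a login against the node that should hold the metastore database:
mysql -u hive -p -h hadoopslave01.*****.***** -e "SELECT 1;"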

In both /usr/hdp/2.6.5.0-292/spark2/conf and /usr/hdp/3.0.1.0-187/spark2/conf, hive-site.xml reads:

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hadoopslave01.*****.*****:9083</value>
</property>
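
Note that hive.metastore.uris is only the Thrift endpoint that metastore clients talk to; schematool, the command failing above, connects to the backing database over JDBC using a different property. A minimal check against the active Hive config (paths as in this thread):

# Thrift endpoint used by metastore clients:
grep -A1 'hive.metastore.uris' /usr/hdp/3.0.1.0-187/hive/conf/hive-site.xml

# JDBC URL that schematool actually connects with:
grep -A1 'javax.jdo.option.ConnectionURL' /usr/hdp/3.0.1.0-187/hive/conf/hive-site.xml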

I'd say the problem comes from the following line in the schematool output, but I can't find where to change it:

Metastore connection URL:	 jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true
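
The lookup can also be reproduced without resuming the upgrade; schematool's -info option prints the connection URL, driver, and user it resolves, which should match the line above:

# Prints "Metastore connection URL", driver and user without modifying anything:
/usr/hdp/current/hive-client/bin/schematool -dbType mysql -info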

2 REPLIES

Contributor

I found this document explaining how to connect Spark to the Hive metastore. I added the following lines in Ambari under Custom spark2-defaults, as well as in /usr/hdp/2.6.5.0-292/spark2/conf/spark-defaults.conf and /usr/hdp/3.0.1.0-187/spark2/conf/spark-defaults.conf:

spark.sql.hive.hiveserver2.jdbc.url: jdbc:hive2://hadoopslave01.*****.*****:2181,hadoopmaster.*****.*****:2181,hadoopslave02.*****.*****:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
spark.datasource.hive.warehouse.metastoreUri: thrift://hadoopslave01.*****.*****:9083 
spark.datasource.hive.warehouse.load.staging.dir: /tmp/hive
spark.hadoop.hive.llap.daemon.service.hosts: @llap0 
spark.hadoop.hive.zookeeper.quorum: hadoopslave01.*****.*****:2181,hadoopmaster.*****.*****:2181,hadoopslave02.*****.*****:2181

Still the same error; the upgrade tries to find the metastore on hadoopmaster instead of hadoopslave01.
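
That outcome is consistent with the properties above only affecting Spark clients: the failing command in the traceback is Hive's schematool, which takes its JDBC URL from the deployed hive-site.xml, not from spark-defaults.conf. A hedged check of what is actually deployed (/etc/hive/conf is the standard config symlink on HDP):

# schematool ignores spark-defaults.conf; it reads the JDBC URL from the
# deployed Hive config:
grep -B1 -A1 'ConnectionURL' /etc/hive/conf/hive-site.xml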

ACCEPTED SOLUTION

Contributor

Found it.

In Ambari > Hive > Config > Database I had:

Database URL: jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true

You need to change localhost to the host you are actually trying to connect to, in my case:

Database URL: jdbc:mysql://hadoopslave01.*****.*****/metastore?createDatabaseIfNotExist=true
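
One caveat worth checking alongside this: with the URL pointing at hadoopslave01, the connection now arrives from the history-server host instead of from localhost, so MySQL needs a matching grant for the hive user. A hedged sketch; the '%' wildcard host and the placeholder password are assumptions, not taken from this thread:

# Run on the MySQL host; lets the hive user connect from any host.
# CREATE USER IF NOT EXISTS needs MySQL 5.7+; on older versions drop the
# IF NOT EXISTS clause.
mysql -u root -p -e "
  CREATE USER IF NOT EXISTS 'hive'@'%' IDENTIFIED BY '<hive-password>';
  GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%';
  FLUSH PRIVILEGES;"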

You will probably need to change the bind-address in your MySQL configuration file as well:

On Ubuntu, edit /etc/mysql/mysql.conf.d/mysqld.cnf and change

bind-address = 127.0.0.1

to

bind-address = 0.0.0.0

Then restart MySQL (on Ubuntu: service mysql restart) and, in Ambari > Hive > Config > Database, test your connection to the metastore.
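
A quick way to verify both changes took effect; a minimal sketch, assuming the ss utility is installed and using the masked hostname from this thread:

# Confirm mysqld now listens beyond loopback (0.0.0.0:3306 instead of 127.0.0.1:3306):
ss -ltn | grep 3306

# Confirm the hive user can reach the metastore database remotely:
mysql -u hive -p -h hadoopslave01.*****.***** metastore -e "SHOW TABLES;"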