
hive metastore upgrade from EMR installation (2.3.0) to hdp 3.1.0



I am trying to upgrade a Hive metastore from an EMR installation (2.3.0) to HDP 3.1.0. I installed HDP 3.1.0 on an EC2 machine and, using the upgrade scripts shipped with HDP 3.1.0, I am trying to upgrade the EMR metastore (which is on 2.3.0).

I ran the upgrade from 2.1.0 to 3.0.0 and then from 3.0.0 to 3.1.0, and created the missing tables manually by following the provided upgrade scripts.
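For anyone following the same path: the upgrade above can also be driven through Hive's schematool instead of running the SQL scripts by hand, which avoids missing tables. A sketch under assumptions — the HDP paths match the stack trace below, but the config location and starting version are this cluster's, not universal:

```shell
# Sketch only: assumed paths/config; schematool reads the metastore JDBC
# settings (javax.jdo.option.ConnectionURL etc.) from HIVE_CONF_DIR.
export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf

# Report the schema version the metastore database currently records.
/usr/hdp/current/hive-server2/bin/schematool -dbType mysql -info

# Upgrade from the recorded version; schematool chains the intermediate
# upgrade scripts (e.g. 2.3.0 -> 3.0.0 -> 3.1.0) itself.
/usr/hdp/current/hive-server2/bin/schematool -dbType mysql \
  -upgradeSchemaFrom 2.3.0 -verbose
```

`schematool -upgradeSchema` (without `From`) picks the starting point from the metastore's VERSION table, which is safer when you are unsure what was last applied.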

Now the issue is that when I try to start HiveServer2 from Ambari, it attempts to run initSchema and fails, saying that table bucketing* already exists.

Is there a way I can skip the initSchema step?

Is there a better approach for this upgrade?
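On the "skip initSchema" question: initSchema creates the schema from scratch and is only meant for an empty database, so on an already-upgraded metastore the usual move is to validate rather than re-initialize. A hedged sketch, reusing the paths from the stack trace below (the exact checks are assumptions about this setup):

```shell
# initSchema fails on any pre-existing table; on a non-empty, upgraded
# metastore, validate the schema instead of re-creating it.
export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf
/usr/hdp/current/hive-server2/bin/schematool -dbType mysql -validate

# Hive decides whether the schema is "initialized" from the VERSION
# table, so also confirm the recorded version reads 3.1.0:
#   mysql> SELECT SCHEMA_VERSION FROM VERSION;
```

If validation passes but Ambari still insists on running initSchema at start, the VERSION row not matching the target release is a common culprit worth checking first.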



Exception stack trace from the Ambari HiveServer2 start:

Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/", line 995, in restart
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/", line 87, in status
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/", line 43, in check_process_status
    raise ComponentIsNotRunning()

The above exception was the cause of the following exception:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/", line 201, in <module>
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/", line 352, in execute
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/", line 1006, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/", line 61, in start
    create_metastore_schema() # execute without config lock
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/", line 487, in create_metastore_schema
    user = params.hive_user
  File "/usr/lib/ambari-agent/lib/resource_management/core/", line 166, in __init__
  File "/usr/lib/ambari-agent/lib/resource_management/core/", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/", line 124, in run_action
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/", line 263, in action_run
  File "/usr/lib/ambari-agent/lib/resource_management/core/", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -initSchema -dbType mysql -userName emr_db_master -passWord [PROTECTED] -verbose' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:     jdbc:mysql://