I can't start the spark2 service.

Explorer
stderr:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/job_history_server.py", line 102, in <module>
    JobHistoryServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/job_history_server.py", line 55, in start
    spark_service('jobhistoryserver', upgrade_type=upgrade_type, action='start')
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_service.py", line 106, in spark_service
    user = params.hive_user)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/hdp/current/hive-client/bin/schematool -dbType mysql -createCatalog spark -catalogDescription 'Default catalog, for Spark' -ifNotExists -catalogLocation hdfs://master.knu.com:8020/apps/spark/warehouse' returned 1.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Create catalog spark at location hdfs://master.knu.com:8020/apps/spark/warehouse
Metastore connection URL: jdbc:mysql://slave1.knu.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException : Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
SQL Error code: 0
Use --verbose for detailed stacktrace.
*** schemaTool failed *** stdout: 2018-09-05 11:49:55,418 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.0.0-1634 -> 3.0.0.0-1634 2018-09-05 11:49:55,530 - Using hadoop conf dir: /usr/hdp/3.0.0.0-1634/hadoop/conf 2018-09-05 11:49:56,372 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.0.0-1634 -> 3.0.0.0-1634 2018-09-05 11:49:56,423 - Using hadoop conf dir: /usr/hdp/3.0.0.0-1634/hadoop/conf 2018-09-05 11:49:56,424 - Group['kms'] {} 2018-09-05 11:49:56,425 - Group['livy'] {} 2018-09-05 11:49:56,425 - Group['spark'] {} 2018-09-05 11:49:56,426 - Group['ranger'] {} 2018-09-05 11:49:56,426 - Group['hdfs'] {} 2018-09-05 11:49:56,426 - Group['zeppelin'] {} 2018-09-05 11:49:56,426 - Group['hadoop'] {} 2018-09-05 11:49:56,426 - Group['users'] {} 2018-09-05 11:49:56,426 - Group['knox'] {} 2018-09-05 11:49:56,427 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,428 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,429 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,430 - User['superset'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,431 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,431 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,474 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None} 2018-09-05 11:49:56,475 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None} 2018-09-05 11:49:56,476 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,477 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None} 2018-09-05 11:49:56,477 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,478 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,479 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,480 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2018-09-05 11:49:56,481 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2018-09-05 11:49:56,482 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None} 2018-09-05 11:49:56,483 - User['logsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,484 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None} 2018-09-05 11:49:56,512 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,513 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2018-09-05 11:49:56,514 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': 
['hadoop'], 'uid': None} 2018-09-05 11:49:56,515 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None} 2018-09-05 11:49:56,516 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,516 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,517 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-09-05 11:49:56,518 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None} 2018-09-05 11:49:56,519 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2018-09-05 11:49:56,520 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2018-09-05 11:49:56,695 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2018-09-05 11:49:56,696 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2018-09-05 11:49:56,696 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2018-09-05 11:49:56,698 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2018-09-05 11:49:56,698 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2018-09-05 11:49:56,870 - call returned (0, '1011') 2018-09-05 11:49:56,871 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1011'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2018-09-05 11:49:56,996 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1011'] due to not_if 2018-09-05 11:49:56,996 - Group['hdfs'] {} 2018-09-05 11:49:56,997 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']} 2018-09-05 11:49:56,997 - FS Type: HDFS 2018-09-05 11:49:56,997 - Directory['/etc/hadoop'] {'mode': 0755} 2018-09-05 11:49:57,041 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2018-09-05 11:49:57,041 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2018-09-05 11:49:57,123 - Execute[('setenforce', '0')] {'not_if': '(! 
which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2018-09-05 11:49:57,395 - Skipping Execute[('setenforce', '0')] due to not_if 2018-09-05 11:49:57,396 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'} 2018-09-05 11:49:57,397 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'} 2018-09-05 11:49:57,398 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'} 2018-09-05 11:49:57,398 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'} 2018-09-05 11:49:57,401 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2018-09-05 11:49:57,402 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2018-09-05 11:49:57,431 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2018-09-05 11:49:57,438 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2018-09-05 11:49:57,487 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2018-09-05 11:49:57,488 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2018-09-05 11:49:57,491 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644} 2018-09-05 11:49:57,534 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2018-09-05 11:49:57,593 - Skipping unlimited key JCE policy check and setup since it is not required 2018-09-05 11:49:58,993 - Using hadoop conf dir: /usr/hdp/3.0.0.0-1634/hadoop/conf 2018-09-05 11:49:59,052 - Directory['/var/run/spark2'] {'owner': 'spark', 'create_parents': True, 'group': 'hadoop', 'mode': 0775} 2018-09-05 11:49:59,053 - Directory['/var/log/spark2'] {'owner': 'spark', 'group': 'hadoop', 'create_parents': True, 'mode': 0775} 2018-09-05 11:49:59,053 - HdfsResource['/user/spark'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://master.knu.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'spark', 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0775} 2018-09-05 11:49:59,055 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.knu.com:50070/webhdfs/v1/user/spark?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp1edMkC 2>/tmp/tmp79pBvj''] {'logoutput': None, 'quiet': False} 2018-09-05 11:50:00,150 - call returned (0, '') 2018-09-05 11:50:00,150 - get_user_call_output returned (0, 
u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":17406,"group":"hdfs","length":0,"modificationTime":1536115798636,"owner":"spark","pathSuffix":"","permission":"775","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2018-09-05 11:50:00,151 - HdfsResource['/apps/spark/warehouse'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://master.knu.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'spark', 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0777} 2018-09-05 11:50:00,151 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.knu.com:50070/webhdfs/v1/apps/spark/warehouse?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpUp29T5 2>/tmp/tmppR9TAf''] {'logoutput': None, 'quiet': False} 2018-09-05 11:50:00,967 - call returned (0, '') 2018-09-05 11:50:00,968 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":17408,"group":"hdfs","length":0,"modificationTime":1536115799157,"owner":"spark","pathSuffix":"","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2018-09-05 11:50:00,968 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://master.knu.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']} 2018-09-05 11:50:00,973 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2018-09-05 11:50:00,973 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-spark2.json 2018-09-05 11:50:00,974 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-spark2.json'] {'content': Template('input.config-spark2.json.j2'), 'mode': 0644} 2018-09-05 11:50:00,974 - PropertiesFile['/usr/hdp/current/spark2-historyserver/conf/spark-defaults.conf'] {'owner': 'spark', 'key_value_delimiter': ' ', 'group': 'spark', 'mode': 0644, 'properties': ...} 2018-09-05 11:50:00,995 - Generating properties file: /usr/hdp/current/spark2-historyserver/conf/spark-defaults.conf 2018-09-05 11:50:00,996 - File['/usr/hdp/current/spark2-historyserver/conf/spark-defaults.conf'] {'owner': 'spark', 'content': InlineTemplate(...), 'group': 'spark', 'mode': 0644, 'encoding': 'UTF-8'} 2018-09-05 11:50:01,093 - Writing File['/usr/hdp/current/spark2-historyserver/conf/spark-defaults.conf'] because contents don't match 2018-09-05 11:50:01,096 - File['/usr/hdp/current/spark2-historyserver/conf/spark-env.sh'] {'content': InlineTemplate(...), 'owner': 'spark', 'group': 'spark', 'mode': 0644} 2018-09-05 11:50:01,096 - Writing File['/usr/hdp/current/spark2-historyserver/conf/spark-env.sh'] 
because contents don't match 2018-09-05 11:50:01,171 - File['/usr/hdp/current/spark2-historyserver/conf/log4j.properties'] {'content': ..., 'owner': 'spark', 'group': 'spark', 'mode': 0644} 2018-09-05 11:50:01,173 - File['/usr/hdp/current/spark2-historyserver/conf/metrics.properties'] {'content': InlineTemplate(...), 'owner': 'spark', 'group': 'spark', 'mode': 0644} 2018-09-05 11:50:01,174 - XmlConfig['hive-site.xml'] {'owner': 'spark', 'group': 'spark', 'mode': 0644, 'conf_dir': '/usr/hdp/current/spark2-historyserver/conf', 'configurations': ...} 2018-09-05 11:50:01,180 - Generating config: /usr/hdp/current/spark2-historyserver/conf/hive-site.xml 2018-09-05 11:50:01,180 - File['/usr/hdp/current/spark2-historyserver/conf/hive-site.xml'] {'owner': 'spark', 'content': InlineTemplate(...), 'group': 'spark', 'mode': 0644, 'encoding': 'UTF-8'} 2018-09-05 11:50:01,254 - PropertiesFile['/usr/hdp/current/spark2-historyserver/conf/spark-thrift-sparkconf.conf'] {'owner': 'hive', 'key_value_delimiter': ' ', 'group': 'hadoop', 'mode': 0644, 'properties': ...} 2018-09-05 11:50:01,257 - Generating properties file: /usr/hdp/current/spark2-historyserver/conf/spark-thrift-sparkconf.conf 2018-09-05 11:50:01,257 - File['/usr/hdp/current/spark2-historyserver/conf/spark-thrift-sparkconf.conf'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2018-09-05 11:50:01,424 - Writing File['/usr/hdp/current/spark2-historyserver/conf/spark-thrift-sparkconf.conf'] because contents don't match 2018-09-05 11:50:01,427 - File['/usr/hdp/current/spark2-historyserver/conf/spark-thrift-fairscheduler.xml'] {'content': InlineTemplate(...), 'owner': 'spark', 'group': 'spark', 'mode': 0755} 2018-09-05 11:50:01,429 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.0.0-1634 -> 3.0.0.0-1634 2018-09-05 11:50:01,429 - Tarball version was calcuated as 3.0.0.0-1634. Use Command Version: True 2018-09-05 11:51:05,230 - Called copy_to_hdfs tarball: spark2 2018-09-05 11:51:05,230 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.0.0-1634 -> 3.0.0.0-1634 2018-09-05 11:51:05,230 - Tarball version was calcuated as 3.0.0.0-1634. 
Use Command Version: True 2018-09-05 11:51:05,230 - Source file: /tmp/spark2/spark2-hdp-yarn-archive.tar.gz , Dest file in HDFS: /hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-yarn-archive.tar.gz 2018-09-05 11:51:05,230 - HdfsResource['/hdp/apps/3.0.0.0-1634/spark2'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://master.knu.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555} 2018-09-05 11:51:05,278 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpz7Dtfv 2>/tmp/tmpNQetHb''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:06,615 - call returned (0, '') 2018-09-05 11:51:06,615 - get_user_call_output returned (0, u'{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /hdp/apps/3.0.0.0-1634/spark2"}}404', u'') 2018-09-05 11:51:06,616 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2?op=MKDIRS&user.name=hdfs'"'"' 1>/tmp/tmpephO5j 2>/tmp/tmp6_WUUs''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:07,848 - call returned (0, '') 2018-09-05 11:51:07,848 - get_user_call_output returned (0, u'{"boolean":true}200', u'') 2018-09-05 11:51:07,849 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2?op=SETPERMISSION&user.name=hdfs&permission=555'"'"' 1>/tmp/tmpZ7rpnL 2>/tmp/tmptJ9wUa''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:09,026 - call returned (0, '') 2018-09-05 11:51:09,026 - get_user_call_output returned (0, u'200', u'') 2018-09-05 11:51:09,027 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2?op=SETOWNER&owner=hdfs&group=&user.name=hdfs'"'"' 1>/tmp/tmpKbM_6z 2>/tmp/tmp2nIvQG''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:10,120 - call returned (0, '') 2018-09-05 11:51:10,120 - get_user_call_output returned (0, u'200', u'') 2018-09-05 11:51:10,121 - HdfsResource['/hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-yarn-archive.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'source': '/tmp/spark2/spark2-hdp-yarn-archive.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://master.knu.com:8020', 'replace_existing_files': True, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', 
u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444} 2018-09-05 11:51:10,122 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-yarn-archive.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpoIUAbW 2>/tmp/tmpEh3shC''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:11,496 - call returned (0, '') 2018-09-05 11:51:11,496 - get_user_call_output returned (0, u'{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-yarn-archive.tar.gz"}}404', u'') 2018-09-05 11:51:11,497 - Creating new file /hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-yarn-archive.tar.gz in DFS 2018-09-05 11:51:11,498 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --data-binary @/tmp/spark2/spark2-hdp-yarn-archive.tar.gz -H '"'"'Content-Type: application/octet-stream'"'"' '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-yarn-archive.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'"'"' 1>/tmp/tmpW64eOo 2>/tmp/tmpFZhZH6''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:19,830 - call returned (0, '') 2018-09-05 11:51:19,830 - get_user_call_output returned (0, u'201', u'') 2018-09-05 11:51:19,831 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-yarn-archive.tar.gz?op=SETPERMISSION&user.name=hdfs&permission=444'"'"' 1>/tmp/tmp02gLvd 2>/tmp/tmpIPHBkn''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:21,672 - call returned (0, '') 2018-09-05 11:51:21,672 - get_user_call_output returned (0, u'200', u'') 2018-09-05 11:51:21,673 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-yarn-archive.tar.gz?op=SETOWNER&owner=hdfs&group=hadoop&user.name=hdfs'"'"' 1>/tmp/tmple4yDD 2>/tmp/tmp_5a28V''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:22,503 - call returned (0, '') 2018-09-05 11:51:22,503 - get_user_call_output returned (0, u'200', u'') 2018-09-05 11:51:22,503 - Will attempt to copy spark2 tarball from /tmp/spark2/spark2-hdp-yarn-archive.tar.gz to DFS at /hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-yarn-archive.tar.gz. 2018-09-05 11:51:22,503 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.0.0-1634 -> 3.0.0.0-1634 2018-09-05 11:51:22,503 - Tarball version was calcuated as 3.0.0.0-1634. Use Command Version: True 2018-09-05 11:51:32,066 - Called copy_to_hdfs tarball: spark2hive 2018-09-05 11:51:32,066 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.0.0-1634 -> 3.0.0.0-1634 2018-09-05 11:51:32,066 - Tarball version was calcuated as 3.0.0.0-1634. 
Use Command Version: True 2018-09-05 11:51:32,066 - Source file: /tmp/spark2/spark2-hdp-hive-archive.tar.gz , Dest file in HDFS: /hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-hive-archive.tar.gz 2018-09-05 11:51:32,067 - HdfsResource['/hdp/apps/3.0.0.0-1634/spark2'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://master.knu.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555} 2018-09-05 11:51:32,068 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpx1kovB 2>/tmp/tmpcsoGo2''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:33,386 - call returned (0, '') 2018-09-05 11:51:33,386 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":17426,"group":"hdfs","length":0,"modificationTime":1536115874374,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2018-09-05 11:51:33,387 - HdfsResource['/hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-hive-archive.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'source': '/tmp/spark2/spark2-hdp-hive-archive.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://master.knu.com:8020', 'replace_existing_files': True, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444} 2018-09-05 11:51:33,388 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-hive-archive.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp8jdktw 2>/tmp/tmp0AaRRh''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:34,857 - call returned (0, '') 2018-09-05 11:51:34,857 - get_user_call_output returned (0, u'{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-hive-archive.tar.gz"}}404', u'') 2018-09-05 11:51:34,857 - Creating new file /hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-hive-archive.tar.gz in DFS 2018-09-05 11:51:34,858 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --data-binary @/tmp/spark2/spark2-hdp-hive-archive.tar.gz -H '"'"'Content-Type: application/octet-stream'"'"' '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-hive-archive.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'"'"' 1>/tmp/tmp2XUKZL 
2>/tmp/tmpUNgUS7''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:37,761 - call returned (0, '') 2018-09-05 11:51:37,761 - get_user_call_output returned (0, u'201', u'') 2018-09-05 11:51:37,762 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-hive-archive.tar.gz?op=SETPERMISSION&user.name=hdfs&permission=444'"'"' 1>/tmp/tmpMlitMr 2>/tmp/tmpJUH6S6''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:38,290 - call returned (0, '') 2018-09-05 11:51:38,291 - get_user_call_output returned (0, u'200', u'') 2018-09-05 11:51:38,292 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://master.knu.com:50070/webhdfs/v1/hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-hive-archive.tar.gz?op=SETOWNER&owner=hdfs&group=hadoop&user.name=hdfs'"'"' 1>/tmp/tmpCXoyTh 2>/tmp/tmpNKJFP4''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:39,734 - call returned (0, '') 2018-09-05 11:51:39,734 - get_user_call_output returned (0, u'200', u'') 2018-09-05 11:51:39,734 - Will attempt to copy spark2hive tarball from /tmp/spark2/spark2-hdp-hive-archive.tar.gz to DFS at /hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-hive-archive.tar.gz. 2018-09-05 11:51:39,735 - HdfsResource['hdfs:///spark2-history/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://master.knu.com:8020', 'user': 'hdfs', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'recursive_chmod': True, 'owner': 'spark', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0777} 2018-09-05 11:51:39,736 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.knu.com:50070/webhdfs/v1/spark2-history/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp2uNUba 2>/tmp/tmpJPCy9p''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:41,168 - call returned (0, '') 2018-09-05 11:51:41,168 - get_user_call_output returned (0, u'{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /spark2-history/"}}404', u'') 2018-09-05 11:51:41,169 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://master.knu.com:50070/webhdfs/v1/spark2-history/?op=MKDIRS&user.name=hdfs'"'"' 1>/tmp/tmpT_YFxS 2>/tmp/tmpan9eGR''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:42,472 - call returned (0, '') 2018-09-05 11:51:42,472 - get_user_call_output returned (0, u'{"boolean":true}200', u'') 2018-09-05 11:51:42,473 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://master.knu.com:50070/webhdfs/v1/spark2-history/?op=SETPERMISSION&user.name=hdfs&permission=777'"'"' 1>/tmp/tmpOP1v5M 2>/tmp/tmpejuGHK''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:43,091 - call returned (0, '') 2018-09-05 11:51:43,091 - get_user_call_output returned (0, u'200', u'') 2018-09-05 11:51:43,092 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w 
'"'"'%{http_code}'"'"' -X GET '"'"'http://master.knu.com:50070/webhdfs/v1/spark2-history/?op=GETCONTENTSUMMARY&user.name=hdfs'"'"' 1>/tmp/tmpho0h5E 2>/tmp/tmpBEwEKH''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:43,828 - call returned (0, '') 2018-09-05 11:51:43,829 - get_user_call_output returned (0, u'{"ContentSummary":{"directoryCount":1,"fileCount":0,"length":0,"quota":-1,"spaceConsumed":0,"spaceQuota":-1,"typeQuota":{}}}200', u'') 2018-09-05 11:51:43,830 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.knu.com:50070/webhdfs/v1/spark2-history/?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmpksIOEF 2>/tmp/tmpqXcoGI''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:44,834 - call returned (0, '') 2018-09-05 11:51:44,834 - get_user_call_output returned (0, u'{"FileStatuses":{"FileStatus":[\n\n]}}\n200', u'') 2018-09-05 11:51:44,835 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://master.knu.com:50070/webhdfs/v1/spark2-history/?op=SETOWNER&owner=spark&group=hadoop&user.name=hdfs'"'"' 1>/tmp/tmpYTKNqc 2>/tmp/tmpMkS9qH''] {'logoutput': None, 'quiet': False} 2018-09-05 11:51:45,455 - call returned (0, '') 2018-09-05 11:51:45,455 - get_user_call_output returned (0, u'200', u'') 2018-09-05 11:51:45,455 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://master.knu.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']} 2018-09-05 11:51:45,458 - Execute['/usr/hdp/current/hive-client/bin/schematool -dbType mysql -createCatalog spark -catalogDescription 'Default catalog, for Spark' -ifNotExists -catalogLocation hdfs://master.knu.com:8020/apps/spark/warehouse'] {'user': 'hive'} Command failed after 1 tries

What is the problem?

1 ACCEPTED SOLUTION

Master Mentor

@Taehyeon Lee

The problem seems to be with your MySQL database, as we can see from the following cause of the failure:

Metastore connection URL: jdbc:mysql://slave1.xxxxxx.com/hive?createDatabaseIfNotExist=true 
Metastore Connection Driver : com.mysql.jdbc.Driver 
Metastore connection User: hive 
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version. 
Underlying cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException : 
Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. 
The driver has not received any packets from the server. SQL Error code: 0 Use --verbose for detailed 
stacktrace. schemaTool failed



Then "Communications link failure" error indicates that your MySQL database might not be running or it might have some communication issue.
So please check if the MySQL server is running on the mentioned host "slave1.xxxxxx.com"? And are we able to access its default port (3306) from the Spark Host?

# telnet slave1.xxxxx.com 3306
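In addition to telnet, you could also verify the port and an actual database login with the MySQL client from the Spark host. This is only a suggested extra check, reusing the masked hostname from above and the "hive" user from the log; substitute your real values:

# nc -vz slave1.xxxxx.com 3306
# mysql -h slave1.xxxxx.com -u hive -p hive

If the mysql client can connect and open the "hive" database, the JDBC connection used by schematool should work as well.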


You should also check and, if necessary, edit the "bind-address" setting in "/etc/my.cnf" so that MySQL binds to the server hostname or to all addresses:

bind-address=0.0.0.0
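For reference, this setting belongs in the [mysqld] section of /etc/my.cnf, and MySQL has to be restarted before it takes effect. A minimal sketch (the service may be named mysqld, mysql, or mariadb depending on your distribution):

[mysqld]
bind-address=0.0.0.0

# systemctl restart mysqld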


https://dev.mysql.com/doc/refman/5.7/en/server-options.html

  • If the address is 0.0.0.0, the server accepts TCP/IP connections on all server host IPv4 interfaces.
  • If the address is ::, the server accepts TCP/IP connections on all server host IPv4 and IPv6 interfaces.

Also, on the MySQL server host, please check whether port 3306 is listening and MySQL is running fine. Try restarting the MySQL service, and check whether the firewall is disabled on the MySQL server host:

# netstat -tnlpa | grep 3306
# service iptables stop
# systemctl disable firewalld
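If you would rather keep firewalld enabled than disable it, opening only the MySQL port is an alternative (assuming a firewalld-based host):

# firewall-cmd --permanent --add-port=3306/tcp
# firewall-cmd --reload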


Some more details about troubleshooting the "Communications link failure" can be found here: https://community.hortonworks.com/questions/139703/hive-metastore-trouble-with-jbdc-mysql.html
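Once MySQL is reachable again, you can re-run the exact schematool command that failed (copied from the stderr above) as the hive user, to confirm the fix before retrying the Spark2 service start from Ambari:

# su - hive -c "/usr/hdp/current/hive-client/bin/schematool -dbType mysql -createCatalog spark -catalogDescription 'Default catalog, for Spark' -ifNotExists -catalogLocation hdfs://master.knu.com:8020/apps/spark/warehouse"

If it completes without the CommunicationsException, the Spark2 JobHistoryServer should start normally.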
