
Failed to start HiveServer2 using Ambari 2.7 and HDP 3.1


Changes I tried that did not help:

  1. Set the home directory for Java (JAVA_HOME)
  2. Changed the warehouse root directory
  3. Created the znode for HiveServer2 manually (but listing it returns an empty set)
  4. Reinstalled ZooKeeper and Hive
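To see the real startup failure, it helps to look at the HiveServer2 log itself rather than the Ambari wrapper output. A rough sketch of that check (the log path is the HDP 3.1 default and is an assumption; adjust for your cluster):

```shell
#!/usr/bin/env bash
# Hedged sketch: inspect the actual HiveServer2 log for the startup error.
# /var/log/hive is the HDP default hive.log.dir; your cluster may differ.
HS2_LOG=/var/log/hive/hiveserver2.log

if [ -f "$HS2_LOG" ]; then
  # metastore, warehouse-dir, and ZooKeeper registration failures land here,
  # not in the Ambari agent output
  grep -iE "error|exception" "$HS2_LOG" | tail -n 20
  log_checked=yes
else
  echo "no log at $HS2_LOG; check hive.log.dir in hive-env.sh"
  log_checked=yes
fi
```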

stderr: /var/lib/ambari-agent/data/errors-252.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py", line 143, in <module>
    HiveServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py", line 53, in start
    hive_service('hiveserver2', action = 'start', upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_service.py", line 101, in hive_service
    wait_for_znode()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/decorator.py", line 54, in wrapper
    return function(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_service.py", line 184, in wait_for_znode
    raise Exception(format("HiveServer2 is no longer running, check the logs a

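The traceback above ends in `wait_for_znode()`, which polls ZooKeeper until a live HiveServer2 registers itself under `/hiveserver2` and gives up when the process dies first. A rough shell equivalent of that check (the `zkCli.sh` path is the HDP default and `ZK_HOST` is a placeholder; both are assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the check Ambari's wait_for_znode performs: poll ZooKeeper
# for an ephemeral HiveServer2 registration under /hiveserver2.
ZKCLI=/usr/hdp/current/zookeeper-client/bin/zkCli.sh  # assumption: HDP default
ZK_HOST="${ZK_HOST:-localhost:2181}"                  # placeholder quorum address

found=no
for attempt in 1 2 3; do
  if [ -x "$ZKCLI" ]; then
    # a live HiveServer2 registers serverUri=... entries in this namespace
    entries=$("$ZKCLI" -server "$ZK_HOST" ls /hiveserver2 2>/dev/null | tail -n 1)
    case "$entries" in
      *serverUri*) found=yes; break ;;
    esac
  else
    echo "zkCli.sh not found at $ZKCLI; run this on a cluster node"
    break
  fi
  sleep 1
done
echo "hiveserver2 znode registered: $found"
```

An empty result here matches item 3 above: the manually created znode stays empty because HiveServer2 itself exits before it can register, so the root cause is in the HiveServer2 log, not in ZooKeeper.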

stdout: /var/lib/ambari-agent/data/output-252.txt

2019-04-25 17:15:42,610 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:15:42,635 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-04-25 17:15:42,871 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:15:42,878 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-04-25 17:15:42,880 - Group['livy'] {}
2019-04-25 17:15:42,881 - Group['spark'] {}
2019-04-25 17:15:42,881 - Group['ranger'] {}
2019-04-25 17:15:42,882 - Group['hdfs'] {}
2019-04-25 17:15:42,882 - Group['zeppelin'] {}
2019-04-25 17:15:42,882 - Group['hadoop'] {}
2019-04-25 17:15:42,882 - Group['users'] {}
2019-04-25 17:15:42,883 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,884 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,885 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,886 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,888 - User['superset'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,889 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-04-25 17:15:42,890 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,891 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,892 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2019-04-25 17:15:42,893 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-04-25 17:15:42,894 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2019-04-25 17:15:42,895 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,896 - User['logsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,897 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2019-04-25 17:15:42,898 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2019-04-25 17:15:42,900 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-04-25 17:15:42,901 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,902 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-04-25 17:15:42,903 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,904 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,905 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,906 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-04-25 17:15:42,907 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-04-25 17:15:42,909 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-04-25 17:15:42,917 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-04-25 17:15:42,917 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2019-04-25 17:15:42,919 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-04-25 17:15:42,920 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-04-25 17:15:42,921 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2019-04-25 17:15:42,933 - call returned (0, '1022')
2019-04-25 17:15:42,935 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1022'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2019-04-25 17:15:42,941 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1022'] due to not_if
2019-04-25 17:15:42,942 - Group['hdfs'] {}
2019-04-25 17:15:42,943 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-04-25 17:15:42,943 - FS Type: HDFS
2019-04-25 17:15:42,943 - Directory['/etc/hadoop'] {'mode': 0755}
2019-04-25 17:15:42,964 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-04-25 17:15:42,965 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-04-25 17:15:42,987 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-04-25 17:15:43,096 - Skipping Execute[('setenforce', '0')] due to only_if
2019-04-25 17:15:43,097 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-04-25 17:15:43,103 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-04-25 17:15:43,105 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2019-04-25 17:15:43,106 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-04-25 17:15:43,116 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-04-25 17:15:43,150 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-04-25 17:15:43,187 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:43,220 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-04-25 17:15:43,237 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-04-25 17:15:43,328 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-04-25 17:15:43,338 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:43,364 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-04-25 17:15:43,379 - Skipping unlimited key JCE policy check and setup since it is not required
2019-04-25 17:15:43,993 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-04-25 17:15:44,005 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}
2019-04-25 17:15:44,035 - call returned (0, 'hive-server2 - 3.1.0.0-78')
2019-04-25 17:15:44,037 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:15:44,065 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://infra.toodevops.com:8080/resources/CredentialUtil.jar'), 'mode': 0755}
2019-04-25 17:15:44,067 - Not downloading the file from http://infra.toodevops.com:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists
2019-04-25 17:15:45,381 - Directories to fill with configs: [u'/usr/hdp/current/hive-server2/conf', u'/usr/hdp/current/hive-server2/conf/']
2019-04-25 17:15:45,381 - Directory['/etc/hive/3.1.0.0-78/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755}
2019-04-25 17:15:45,383 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2019-04-25 17:15:45,399 - Generating config: /etc/hive/3.1.0.0-78/0/mapred-site.xml
2019-04-25 17:15:45,399 - File['/etc/hive/3.1.0.0-78/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-04-25 17:15:45,457 - File['/etc/hive/3.1.0.0-78/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,458 - File['/etc/hive/3.1.0.0-78/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755}
2019-04-25 17:15:45,462 - File['/etc/hive/3.1.0.0-78/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,466 - File['/etc/hive/3.1.0.0-78/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,469 - File['/etc/hive/3.1.0.0-78/0/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,472 - File['/etc/hive/3.1.0.0-78/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,475 - File['/etc/hive/3.1.0.0-78/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,476 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://infra.toodevops.com:2181,infraha.toodevops.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.default': u'container'}}
2019-04-25 17:15:45,487 - Generating config: /etc/hive/3.1.0.0-78/0/beeline-site.xml
2019-04-25 17:15:45,488 - File['/etc/hive/3.1.0.0-78/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-04-25 17:15:45,491 - File['/etc/hive/3.1.0.0-78/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,492 - Directory['/etc/hive/3.1.0.0-78/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755}
2019-04-25 17:15:45,492 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2019-04-25 17:15:45,503 - Generating config: /etc/hive/3.1.0.0-78/0/mapred-site.xml
2019-04-25 17:15:45,504 - File['/etc/hive/3.1.0.0-78/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-04-25 17:15:45,561 - File['/etc/hive/3.1.0.0-78/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,562 - File['/etc/hive/3.1.0.0-78/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755}
2019-04-25 17:15:45,567 - File['/etc/hive/3.1.0.0-78/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,570 - File['/etc/hive/3.1.0.0-78/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,573 - File['/etc/hive/3.1.0.0-78/0/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,576 - File['/etc/hive/3.1.0.0-78/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,578 - File['/etc/hive/3.1.0.0-78/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,579 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://infra.toodevops.com:2181,infraha.toodevops.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.default': u'container'}}
2019-04-25 17:15:45,591 - Generating config: /etc/hive/3.1.0.0-78/0/beeline-site.xml
2019-04-25 17:15:45,591 - File['/etc/hive/3.1.0.0-78/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-04-25 17:15:45,594 - File['/etc/hive/3.1.0.0-78/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-25 17:15:45,595 - File['/usr/hdp/current/hive-server2/conf/hive-site.jceks'] {'content': StaticFile('/var/lib/ambari-agent/cred/conf/hive_server/hive-site.jceks'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0640}
2019-04-25 17:15:45,596 - Writing File['/usr/hdp/current/hive-server2/conf/hive-site.jceks'] because contents don't match
2019-04-25 17:15:45,597 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-server2/conf/', 'mode': 0644, 'configuration_attributes': {u'hidden': {u'javax.jdo.option.ConnectionPassword': u'HIVE_CLIENT,CONFIG_DOWNLOAD'}}, 'owner': 'hive', 'configurations': ...}
2019-04-25 17:15:45,607 - Generating config: /usr/hdp/current/hive-server2/conf/hive-site.xml
2019-04-25 17:15:45,607 - File['/usr/hdp/current/hive-server2/conf/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-04-25 17:15:45,784 - Writing File['/usr/hdp/current/hive-server2/conf/hive-site.xml'] because contents don't match
2019-04-25 17:15:45,785 - Generating Atlas Hook config file /usr/hdp/current/hive-server2/conf/atlas-application.properties
2019-04-25 17:15:45,785 - PropertiesFile['/usr/hdp/current/hive-server2/conf/atlas-application.properties'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'properties': ...}
2019-04-25 17:15:45,790 - Generating properties file: /usr/hdp/current/hive-server2/conf/atlas-application.properties
2019-04-25 17:15:45,790 - File['/usr/hdp/current/hive-server2/conf/atlas-application.properties'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-04-25 17:15:45,807 - Writing File['/usr/hdp/current/hive-server2/conf/atlas-application.properties'] because contents don't match
2019-04-25 17:15:45,813 - File['/usr/hdp/current/hive-server2/conf//hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0755}
2019-04-25 17:15:45,813 - Writing File['/usr/hdp/current/hive-server2/conf//hive-env.sh'] because contents don't match
2019-04-25 17:15:45,814 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-04-25 17:15:45,817 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-04-25 17:15:45,818 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://infra.toodevops.com:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2019-04-25 17:15:45,819 - Not downloading the file from http://infra.toodevops.com:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2019-04-25 17:15:45,819 - Directory['/var/run/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2019-04-25 17:15:45,820 - Directory['/var/log/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2019-04-25 17:15:45,820 - Directory['/var/lib/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2019-04-25 17:15:45,830 - File['/var/lib/ambari-agent/tmp/start_hiveserver2_script'] {'content': Template('startHiveserver2.sh.j2'), 'mode': 0755}
2019-04-25 17:15:45,859 - File['/usr/hdp/current/hive-server2/conf/hadoop-metrics2-hiveserver2.properties'] {'content': Template('hadoop-metrics2-hiveserver2.properties.j2'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2019-04-25 17:15:45,870 - XmlConfig['hiveserver2-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-server2/conf/', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2019-04-25 17:15:45,881 - Generating config: /usr/hdp/current/hive-server2/conf/hiveserver2-site.xml
2019-04-25 17:15:45,881 - File['/usr/hdp/current/hive-server2/conf/hiveserver2-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2019-04-25 17:15:45,893 - Called copy_to_hdfs tarball: mapreduce
2019-04-25 17:15:45,893 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:15:45,893 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-25 17:15:45,894 - Source file: /usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz , Dest file in HDFS: /hdp/apps/3.1.0.0-78/mapreduce/mapreduce.tar.gz
2019-04-25 17:15:45,894 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:15:45,894 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-25 17:15:45,894 - HdfsResource['/hdp/apps/3.1.0.0-78/mapreduce'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555}
2019-04-25 17:15:45,898 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/mapreduce?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpIHx4bN 2>/tmp/tmpxqJSGm''] {'logoutput': None, 'quiet': False}
2019-04-25 17:15:46,067 - call returned (0, '')
2019-04-25 17:15:46,068 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16415,"group":"hdfs","length":0,"modificationTime":1556118410835,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-25 17:15:46,070 - HdfsResource['/hdp/apps/3.1.0.0-78/mapreduce/mapreduce.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444}
2019-04-25 17:15:46,072 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/mapreduce/mapreduce.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp3q5vl3 2>/tmp/tmpfbGZeF''] {'logoutput': None, 'quiet': False}
2019-04-25 17:15:46,235 - call returned (0, '')
2019-04-25 17:15:46,235 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1556117415821,"blockSize":134217728,"childrenNum":0,"fileId":16426,"group":"hadoop","length":308401145,"modificationTime":1556117419759,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'')
2019-04-25 17:15:46,237 - DFS file /hdp/apps/3.1.0.0-78/mapreduce/mapreduce.tar.gz is identical to /usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz, skipping the copying
2019-04-25 17:15:46,237 - Will attempt to copy mapreduce tarball from /usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz to DFS at /hdp/apps/3.1.0.0-78/mapreduce/mapreduce.tar.gz.
2019-04-25 17:15:46,237 - Called copy_to_hdfs tarball: tez
2019-04-25 17:15:46,237 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:15:46,238 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-25 17:15:46,238 - Source file: /usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz , Dest file in HDFS: /hdp/apps/3.1.0.0-78/tez/tez.tar.gz
2019-04-25 17:15:46,238 - Preparing the Tez tarball...
2019-04-25 17:15:46,238 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:15:46,238 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-25 17:15:46,239 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:15:46,239 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-25 17:15:46,239 - Extracting /usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz to /var/lib/ambari-agent/tmp/mapreduce-tarball-AKNIox
2019-04-25 17:15:46,240 - Execute[('tar', '-xf', u'/usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz', '-C', '/var/lib/ambari-agent/tmp/mapreduce-tarball-AKNIox/')] {'tries': 3, 'sudo': True, 'try_sleep': 1}
2019-04-25 17:15:53,050 - Extracting /usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz to /var/lib/ambari-agent/tmp/tez-tarball-E9LkDJ
2019-04-25 17:15:53,051 - Execute[('tar', '-xf', u'/usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz', '-C', '/var/lib/ambari-agent/tmp/tez-tarball-E9LkDJ/')] {'tries': 3, 'sudo': True, 'try_sleep': 1}
2019-04-25 17:15:58,727 - Execute[('cp', '-a', '/var/lib/ambari-agent/tmp/mapreduce-tarball-AKNIox/hadoop/lib/native', '/var/lib/ambari-agent/tmp/tez-tarball-E9LkDJ/lib')] {'sudo': True}
2019-04-25 17:15:58,899 - Directory['/var/lib/ambari-agent/tmp/tez-tarball-E9LkDJ/lib'] {'recursive_ownership': True, 'mode': 0755, 'cd_access': 'a'}
2019-04-25 17:15:58,900 - Creating a new Tez tarball at /var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz
2019-04-25 17:15:58,901 - Execute[('tar', '-zchf', '/tmp/tmpno811A', '-C', '/var/lib/ambari-agent/tmp/tez-tarball-E9LkDJ', '.')] {'tries': 3, 'sudo': True, 'try_sleep': 1}
2019-04-25 17:16:16,988 - Execute[('mv', '/tmp/tmpno811A', '/var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz')] {}
2019-04-25 17:16:17,868 - HdfsResource['/hdp/apps/3.1.0.0-78/tez'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555}
2019-04-25 17:16:17,871 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/tez?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp6FxIsD 2>/tmp/tmpjwQErS''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:19,259 - call returned (0, '')
2019-04-25 17:16:19,259 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16433,"group":"hdfs","length":0,"modificationTime":1556117444130,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-25 17:16:19,262 - HdfsResource['/hdp/apps/3.1.0.0-78/tez/tez.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': '/var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444}
2019-04-25 17:16:19,265 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/tez/tez.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpJ_MZ1g 2>/tmp/tmpR2Ys4u''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:19,408 - call returned (0, '')
2019-04-25 17:16:19,409 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1556117444130,"blockSize":134217728,"childrenNum":0,"fileId":16434,"group":"hadoop","length":254858532,"modificationTime":1556117446004,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'')
2019-04-25 17:16:19,410 - Not replacing existing DFS file /hdp/apps/3.1.0.0-78/tez/tez.tar.gz which is different from /var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz, due to replace_existing_files=False
2019-04-25 17:16:19,411 - Will attempt to copy tez tarball from /var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz to DFS at /hdp/apps/3.1.0.0-78/tez/tez.tar.gz.
2019-04-25 17:16:19,411 - Called copy_to_hdfs tarball: pig
2019-04-25 17:16:19,412 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:16:19,412 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-25 17:16:19,412 - pig-env is not present on the cluster. Skip copying /usr/hdp/3.1.0.0-78/pig/pig.tar.gz
2019-04-25 17:16:19,412 - Called copy_to_hdfs tarball: hive
2019-04-25 17:16:19,413 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:16:19,413 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-25 17:16:19,413 - Source file: /usr/hdp/3.1.0.0-78/hive/hive.tar.gz , Dest file in HDFS: /hdp/apps/3.1.0.0-78/hive/hive.tar.gz
2019-04-25 17:16:19,417 - HdfsResource['/hdp/apps/3.1.0.0-78/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555}
2019-04-25 17:16:19,420 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/hive?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpeangH2 2>/tmp/tmpIK6nfs''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:19,557 - call returned (0, '')
2019-04-25 17:16:19,558 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":17694,"group":"hdfs","length":0,"modificationTime":1556118404431,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-25 17:16:19,561 - HdfsResource['/hdp/apps/3.1.0.0-78/hive/hive.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/3.1.0.0-78/hive/hive.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444}
2019-04-25 17:16:19,564 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/hive/hive.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmptbu0YX 2>/tmp/tmpS8fxqG''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:19,704 - call returned (0, '')
2019-04-25 17:16:19,705 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1556118404431,"blockSize":134217728,"childrenNum":0,"fileId":17695,"group":"hadoop","length":363312102,"modificationTime":1556118407635,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'')
2019-04-25 17:16:19,706 - DFS file /hdp/apps/3.1.0.0-78/hive/hive.tar.gz is identical to /usr/hdp/3.1.0.0-78/hive/hive.tar.gz, skipping the copying
2019-04-25 17:16:19,707 - Will attempt to copy hive tarball from /usr/hdp/3.1.0.0-78/hive/hive.tar.gz to DFS at /hdp/apps/3.1.0.0-78/hive/hive.tar.gz.
2019-04-25 17:16:19,707 - Called copy_to_hdfs tarball: sqoop
2019-04-25 17:16:19,707 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:16:19,708 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-25 17:16:19,708 - Source file: /usr/hdp/3.1.0.0-78/sqoop/sqoop.tar.gz , Dest file in HDFS: /hdp/apps/3.1.0.0-78/sqoop/sqoop.tar.gz
2019-04-25 17:16:19,723 - HdfsResource['/hdp/apps/3.1.0.0-78/sqoop'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555}
2019-04-25 17:16:19,726 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/sqoop?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpSuryao 2>/tmp/tmp3SD3xf''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:19,855 - call returned (0, '')
2019-04-25 17:16:19,856 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":17696,"group":"hdfs","length":0,"modificationTime":1556118409393,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-25 17:16:19,858 - HdfsResource['/hdp/apps/3.1.0.0-78/sqoop/sqoop.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/3.1.0.0-78/sqoop/sqoop.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444}
2019-04-25 17:16:19,860 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/sqoop/sqoop.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpOE0Pht 2>/tmp/tmp6ezsyy''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:19,990 - call returned (0, '')
2019-04-25 17:16:19,991 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1556118409393,"blockSize":134217728,"childrenNum":0,"fileId":17697,"group":"hadoop","length":77261259,"modificationTime":1556118410012,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'')
2019-04-25 17:16:19,992 - DFS file /hdp/apps/3.1.0.0-78/sqoop/sqoop.tar.gz is identical to /usr/hdp/3.1.0.0-78/sqoop/sqoop.tar.gz, skipping the copying
2019-04-25 17:16:19,993 - Will attempt to copy sqoop tarball from /usr/hdp/3.1.0.0-78/sqoop/sqoop.tar.gz to DFS at /hdp/apps/3.1.0.0-78/sqoop/sqoop.tar.gz.
2019-04-25 17:16:19,993 - Called copy_to_hdfs tarball: hadoop_streaming
2019-04-25 17:16:19,993 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-25 17:16:19,994 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-25 17:16:19,994 - Source file: /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-streaming.jar , Dest file in HDFS: /hdp/apps/3.1.0.0-78/mapreduce/hadoop-streaming.jar
2019-04-25 17:16:20,014 - HdfsResource['/hdp/apps/3.1.0.0-78/mapreduce'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555}
2019-04-25 17:16:20,016 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/mapreduce?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp0A7Wch 2>/tmp/tmpr6wkVO''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:20,155 - call returned (0, '')
2019-04-25 17:16:20,156 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16415,"group":"hdfs","length":0,"modificationTime":1556118410835,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-25 17:16:20,158 - HdfsResource['/hdp/apps/3.1.0.0-78/mapreduce/hadoop-streaming.jar'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-streaming.jar', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444}
2019-04-25 17:16:20,160 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/mapreduce/hadoop-streaming.jar?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpOwd8tq 2>/tmp/tmpl5tpx7''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:20,325 - call returned (0, '')
2019-04-25 17:16:20,325 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1556118410835,"blockSize":134217728,"childrenNum":0,"fileId":17699,"group":"hadoop","length":176342,"modificationTime":1556118410880,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'')
2019-04-25 17:16:20,327 - DFS file /hdp/apps/3.1.0.0-78/mapreduce/hadoop-streaming.jar is identical to /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-streaming.jar, skipping the copying
2019-04-25 17:16:20,327 - Will attempt to copy hadoop_streaming tarball from /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-streaming.jar to DFS at /hdp/apps/3.1.0.0-78/mapreduce/hadoop-streaming.jar.
2019-04-25 17:16:20,328 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01755}
2019-04-25 17:16:20,331 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpHv8SZA 2>/tmp/tmp1e9n25''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:20,478 - call returned (0, '')
2019-04-25 17:16:20,479 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":46,"fileId":17411,"group":"hadoop","length":0,"modificationTime":1556118413452,"owner":"hive","pathSuffix":"","permission":"1755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-25 17:16:20,481 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/query_data/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2019-04-25 17:16:20,483 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/query_data/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmplK9QuQ 2>/tmp/tmph7VNEG''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:20,610 - call returned (0, '')
2019-04-25 17:16:20,610 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":1,"fileId":17412,"group":"hadoop","length":0,"modificationTime":1556117682872,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-25 17:16:20,612 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/dag_meta'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2019-04-25 17:16:20,615 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/dag_meta?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpDU6pQE 2>/tmp/tmpsQfo4d''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:20,741 - call returned (0, '')
2019-04-25 17:16:20,742 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":0,"fileId":17700,"group":"hadoop","length":0,"modificationTime":1556118412147,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-25 17:16:20,744 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/dag_data'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2019-04-25 17:16:20,747 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/dag_data?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpewir7_ 2>/tmp/tmp1tZC2W''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:20,873 - call returned (0, '')
2019-04-25 17:16:20,874 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":0,"fileId":17701,"group":"hadoop","length":0,"modificationTime":1556118412808,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-25 17:16:20,876 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/app_data'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2019-04-25 17:16:20,879 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://infra.toodevops.com:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/app_data?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmplicaU4 2>/tmp/tmpkNy9C7''] {'logoutput': None, 'quiet': False}
2019-04-25 17:16:21,008 - call returned (0, '')
2019-04-25 17:16:21,009 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":2,"fileId":17702,"group":"hadoop","length":0,"modificationTime":1556162039260,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-25 17:16:21,011 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://infra.toodevops.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']}
2019-04-25 17:16:21,093 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-04-25 17:16:21,094 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json
2019-04-25 17:16:21,095 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json'] {'content': Template('input.config-hive.json.j2'), 'mode': 0644}
2019-04-25 17:16:21,096 - Ranger Hive plugin is not enabled
2019-04-25 17:16:21,098 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive-server.pid 1>/tmp/tmp6VURUS 2>/tmp/tmpdOqSp5''] {'quiet': False}
2019-04-25 17:16:21,241 - call returned (0, '')
2019-04-25 17:16:21,241 - get_user_call_output returned (0, u'56793', u'')
2019-04-25 17:16:21,243 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'hive --config /usr/hdp/current/hive-server2/conf/ --service metatool -listFSRoot' 2>/dev/null | grep hdfs:// | cut -f1,2,3 -d '/' | grep -v 'hdfs://infra.toodevops.com:8020' | head -1'] {}
2019-04-25 17:16:50,419 - call returned (0, '')
2019-04-25 17:16:50,419 - Execute['/var/lib/ambari-agent/tmp/start_hiveserver2_script /var/log/hive/hive-server2.out /var/log/hive/hive-server2.err /var/run/hive/hive-server.pid /usr/hdp/current/hive-server2/conf/ /etc/tez/conf'] {'environment': {'HIVE_BIN': 'hive', 'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112', 'HADOOP_HOME': u'/usr/hdp/current/hadoop-client'}, 'not_if': 'ls /var/run/hive/hive-server.pid >/dev/null 2>&1 && ps -p 56793 >/dev/null 2>&1', 'user': 'hive', 'path': [u'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/var/lib/ambari-agent:/var/lib/ambari-agent:/usr/hdp/current/hive-server2/bin:/usr/hdp/3.1.0.0-78/hadoop/bin']}
2019-04-25 17:16:50,695 - Execute['/usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/hive-server2/lib/mysql-connector-java-5.1.47.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://compute.toodevops.com/hive?createDatabaseIfNotExist=true' bigmehr [PROTECTED] com.mysql.jdbc.Driver'] {'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 5, 'try_sleep': 10}
2019-04-25 17:16:52,141 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:16:53,583 - call returned (1, '')
2019-04-25 17:16:53,584 - Will retry 29 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:17:03,603 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:17:04,719 - call returned (1, '')
2019-04-25 17:17:04,737 - Will retry 28 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:17:14,747 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:17:16,884 - call returned (1, '')
2019-04-25 17:17:16,934 - Will retry 27 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:17:26,945 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:17:28,665 - call returned (1, '')
2019-04-25 17:17:28,666 - Will retry 26 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:17:38,678 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:17:39,800 - call returned (1, '')
2019-04-25 17:17:39,802 - Will retry 25 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:17:49,812 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:17:51,546 - call returned (1, '')
2019-04-25 17:17:51,581 - Will retry 24 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:18:01,591 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:18:03,177 - call returned (1, '')
2019-04-25 17:18:03,178 - Will retry 23 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:18:13,194 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:18:14,074 - call returned (1, '')
2019-04-25 17:18:14,075 - Will retry 22 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:18:24,086 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:18:25,394 - call returned (1, '')
2019-04-25 17:18:25,396 - Will retry 21 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:18:35,407 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:18:36,669 - call returned (1, '')
2019-04-25 17:18:36,670 - Will retry 20 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:18:46,680 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:18:47,730 - call returned (1, '')
2019-04-25 17:18:47,732 - Will retry 19 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:18:57,744 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:18:58,132 - call returned (1, "Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000707000000, 130023424, 0) failed; error='Cannot allocate memory' (errno=12)")
2019-04-25 17:18:58,144 - Will retry 18 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:19:08,154 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:19:10,209 - call returned (1, '')
2019-04-25 17:19:10,210 - Will retry 17 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:19:20,294 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:19:21,714 - call returned (1, '')
2019-04-25 17:19:21,731 - Will retry 16 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:19:31,742 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:19:33,071 - call returned (1, '')
2019-04-25 17:19:33,072 - Will retry 15 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:19:43,083 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:19:44,414 - call returned (1, '')
2019-04-25 17:19:44,415 - Will retry 14 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:19:54,433 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:19:54,750 - call returned (1, "Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000707000000, 130023424, 0) failed; error='Cannot allocate memory' (errno=12)")
2019-04-25 17:19:54,752 - Will retry 13 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:20:04,762 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:20:06,670 - call returned (1, '')
2019-04-25 17:20:06,673 - Will retry 12 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:20:16,684 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:20:18,175 - call returned (1, '')
2019-04-25 17:20:18,176 - Will retry 11 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:20:28,187 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:20:28,546 - call returned (1, "Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000707000000, 130023424, 0) failed; error='Cannot allocate memory' (errno=12)")
2019-04-25 17:20:28,547 - Will retry 10 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:20:38,556 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:20:38,758 - call returned (1, "Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000707000000, 130023424, 0) failed; error='Cannot allocate memory' (errno=12)")
2019-04-25 17:20:38,759 - Will retry 9 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:20:48,770 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:20:49,114 - call returned (1, "Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000707000000, 130023424, 0) failed; error='Cannot allocate memory' (errno=12)")
2019-04-25 17:20:49,115 - Will retry 8 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:20:59,127 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:20:59,416 - call returned (1, "Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000707000000, 130023424, 0) failed; error='Cannot allocate memory' (errno=12)")
2019-04-25 17:20:59,436 - Will retry 7 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:21:09,445 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:21:09,948 - call returned (1, "Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000707000000, 130023424, 0) failed; error='Cannot allocate memory' (errno=12)")
2019-04-25 17:21:09,959 - Will retry 6 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:21:19,970 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server infraha.toodevops.com:2181,infra.toodevops.com:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-25 17:21:20,352 - call returned (1, "Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000707000000, 130023424, 0) failed; error='Cannot allocate memory' (errno=12)")
2019-04-25 17:21:20,365 - Will retry 5 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-25 17:21:30,378 - Process with pid 66871 is not running. Stale pid file at /var/run/hive/hive-server.pid

Command failed after 1 tries
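The later zkCli retries above fail with `os::commit_memory ... Cannot allocate memory (errno=12)`: even a small client JVM cannot reserve ~124 MB of heap, which suggests the host itself is out of memory rather than a ZooKeeper problem. A quick sketch of checks I can run on the HiveServer2 host (standard Linux tools, not cluster-specific):

```shell
# How much RAM and swap is actually free right now
free -m
# Largest memory consumers -- repeated HiveServer2 start attempts
# can leave several large JVMs stacked up
ps aux --sort=-%mem | head -n 10
```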


cat /var/log/hive/hiveserver2.log | grep ERROR

2019-04-25T17:21:22,938 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Exception during rename
2019-04-25T17:21:24,411 ERROR [main]: metrics2.CodahaleMetrics (:()) - Unable to instantiate using constructor(MetricRegistry, HiveConf) for reporter org.apache.hadoop.hive.common.metrics.metrics2.Metrics2Reporter from conf HIVE_CODAHALE_METRICS_REPORTER_CLASSES
2019-04-25T17:21:24,438 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics221400335606640996json to /tmp/report.json
2019-04-25T17:21:24,439 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Exception during rename
2019-04-25T17:21:25,454 ERROR [main]: server.HiveServer2 (HiveServer2.java:start(740)) - Error starting Web UI:
2019-04-25T17:21:25,611 ERROR [main]: server.HiveServer2 (HiveServer2.java:execute(1343)) - Error starting HiveServer2
2019-04-25T17:21:26,105 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics5682491866228312214json to /tmp/report.json
2019-04-25T17:21:26,105 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Exception during rename
2019-04-25T17:21:26,904 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics6263799357756058898json to /tmp/report.json
2019-04-25T17:21:26,904 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Exception during rename
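The repeated "Unable to rename temp file ... to /tmp/report.json" errors usually mean `/tmp/report.json` already exists and is owned by a different user, so the `hive` user cannot replace it. A sketch of that check (the path is taken from the log; the removal is commented out on purpose):

```shell
# Check who owns the metrics report file the reporter keeps failing to replace
REPORT=${REPORT:-/tmp/report.json}
if [ -e "$REPORT" ]; then
  ls -l "$REPORT"          # if the owner is not 'hive', that explains the rename failures
  # sudo rm -f "$REPORT"   # removing it lets the reporter recreate it as 'hive'
else
  echo "no stale $REPORT"
fi
```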

cat /var/log/hive/hiveserver2.log | grep WARN

2019-04-25T17:18:21,322 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(1100)) - Error starting HiveServer2 on attempt 2, will retry in 60000ms
2019-04-25T17:18:21,385 WARN  [PrivilegeSynchronizer]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-04-25T17:18:21,390 WARN  [PrivilegeSynchronizer]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.heapsize does not exist
2019-04-25T17:18:21,480 WARN  [PrivilegeSynchronizer]: metastore.RetryingMetaStoreClient (:()) - MetaStoreClient lost connection. Attempting to reconnect (1 of 24) after 5s. refresh_privileges
2019-04-25T17:19:21,670 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-04-25T17:19:21,671 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.heapsize does not exist
2019-04-25T17:19:21,687 WARN  [main]: impl.MetricsSystemImpl (MetricsSystemImpl.java:init(151)) - hiveserver2 metrics system already initialized!
2019-04-25T17:19:21,688 WARN  [main]: server.HiveServer2 (HiveServer2.java:init(209)) - Could not initiate the HiveServer2 Metrics system.  Metrics may not be reported.
2019-04-25T17:19:21,798 WARN  [main]: session.SessionState (:()) - METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
2019-04-25T17:19:21,941 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-04-25T17:19:21,942 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.heapsize does not exist
2019-04-25T17:19:22,309 WARN  [main-EventThread]: server.HiveServer2 (HiveServer2.java:process(652)) - This HiveServer2 instance is now de-registered from ZooKeeper. The server will be shut down after the last client session completes.
2019-04-25T17:19:22,310 WARN  [main-EventThread]: server.HiveServer2 (HiveServer2.java:process(660)) - This instance of HiveServer2 has been removed from the list of server instances available for dynamic service discovery. The last client session has ended - will shutdown now.
2019-04-25T17:19:22,386 WARN  [PrivilegeSynchronizer]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-04-25T17:19:22,386 WARN  [PrivilegeSynchronizer]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.heapsize does not exist
2019-04-25T17:19:22,398 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(1100)) - Error starting HiveServer2 on attempt 3, will retry in 60000ms
2019-04-25T17:19:22,461 WARN  [HiveMaterializedViewsRegistry-0]: session.SessionState (:()) - METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
2019-04-25T17:19:22,621 WARN  [PrivilegeSynchronizer]: metastore.RetryingMetaStoreClient (:()) - MetaStoreClient lost connection. Attempting to reconnect (1 of 24) after 5s. refresh_privileges
2019-04-25T17:20:22,835 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-04-25T17:20:22,836 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.heapsize does not exist
2019-04-25T17:20:22,845 WARN  [main]: impl.MetricsSystemImpl (MetricsSystemImpl.java:init(151)) - hiveserver2 metrics system already initialized!
2019-04-25T17:20:22,846 WARN  [main]: server.HiveServer2 (HiveServer2.java:init(209)) - Could not initiate the HiveServer2 Metrics system.  Metrics may not be reported.
2019-04-25T17:20:22,974 WARN  [main]: session.SessionState (:()) - METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
2019-04-25T17:20:23,172 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-04-25T17:20:23,172 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.heapsize does not exist
2019-04-25T17:20:23,611 WARN  [main-EventThread]: server.HiveServer2 (HiveServer2.java:process(652)) - This HiveServer2 instance is now de-registered from ZooKeeper. The server will be shut down after the last client session completes.
2019-04-25T17:20:23,611 WARN  [main-EventThread]: server.HiveServer2 (HiveServer2.java:process(660)) - This instance of HiveServer2 has been removed from the list of server instances available for dynamic service discovery. The last client session has ended - will shutdown now.
2019-04-25T17:20:23,632 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(1100)) - Error starting HiveServer2 on attempt 4, will retry in 60000ms
2019-04-25T17:20:23,641 WARN  [PrivilegeSynchronizer]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-04-25T17:20:23,642 WARN  [PrivilegeSynchronizer]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.heapsize does not exist
2019-04-25T17:20:23,773 WARN  [PrivilegeSynchronizer]: metastore.RetryingMetaStoreClient (:()) - MetaStoreClient lost connection. Attempting to reconnect (1 of 24) after 5s. refresh_privileges
2019-04-25T17:20:24,065 WARN  [HiveMaterializedViewsRegistry-0]: session.SessionState (:()) - METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
2019-04-25T17:21:24,392 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-04-25T17:21:24,393 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.heapsize does not exist
2019-04-25T17:21:24,410 WARN  [main]: impl.MetricsSystemImpl (MetricsSystemImpl.java:init(151)) - hiveserver2 metrics system already initialized!
2019-04-25T17:21:24,411 WARN  [main]: server.HiveServer2 (HiveServer2.java:init(209)) - Could not initiate the HiveServer2 Metrics system.  Metrics may not be reported.
2019-04-25T17:21:24,642 WARN  [main]: session.SessionState (:()) - METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
2019-04-25T17:21:24,980 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-04-25T17:21:24,981 WARN  [main]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.heapsize does not exist
2019-04-25T17:21:25,563 WARN  [main-EventThread]: server.HiveServer2 (HiveServer2.java:process(652)) - This HiveServer2 instance is now de-registered from ZooKeeper. The server will be shut down after the last client session completes.
2019-04-25T17:21:25,563 WARN  [main-EventThread]: server.HiveServer2 (HiveServer2.java:process(660)) - This instance of HiveServer2 has been removed from the list of server instances available for dynamic service discovery. The last client session has ended - will shutdown now.
2019-04-25T17:21:25,604 WARN  [HiveMaterializedViewsRegistry-0]: session.SessionState (:()) - METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
2019-04-25T17:21:25,718 WARN  [PrivilegeSynchronizer]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-04-25T17:21:25,718 WARN  [PrivilegeSynchronizer]: conf.HiveConf (HiveConf.java:initialize(5310)) - HiveConf of name hive.heapsize does not exist
2019-04-25T17:21:25,866 WARN  [PrivilegeSynchronizer]: metastore.RetryingMetaStoreClient (:()) - MetaStoreClient lost connection. Attempting to reconnect (1 of 24) after 5s. refresh_privileges
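The WARN lines above repeat on every restart attempt, which makes the distinct messages hard to pick out. A minimal triage sketch (the helper name `summarize_warns` is an assumption; the timestamp-stripping pattern matches the log format shown in this thread):

```shell
# Collapse repeated WARN messages so each distinct one is listed once with a
# count. Timestamp stripping assumes the "2019-04-25T17:19:21,670" format
# seen in the log excerpt above.
# usage: summarize_warns /var/log/hive/hiveserver2.log
summarize_warns() {
  grep ' WARN ' "$1" \
    | sed 's/^[0-9T:,-]* *WARN *//' \
    | sort | uniq -c | sort -rn
}
```

With duplicates counted, the two messages that actually matter here ("Error starting HiveServer2 on attempt N" and the ZooKeeper de-registration) rise to the top instead of being buried among harmless HiveConf warnings.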
1 ACCEPTED SOLUTION


Re: Failed to start HiveServer2 using Ambari 2.7 and HDP 3.1

New Contributor

My problem was that, although the Hive memory settings were configured correctly, the heap could not actually be allocated because the node did not have sufficient memory. Increasing the node's memory solved it.
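This failure mode can be checked up front by comparing the node's available memory against the configured heap. A hedged, Linux-only sketch (the 12288 MB value is a placeholder; substitute your own hive_heapsize from Ambari):

```shell
# Hedged sketch (Linux): compare MemAvailable against the configured
# HiveServer2 heap. 12288 MB is a placeholder; use your hive_heapsize value.
# usage: heap_fits <heap_mb>
heap_fits() {
  avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
  [ $((avail_kb / 1024)) -ge "$1" ]
}

if heap_fits 12288; then
  echo "node has enough free memory for the HiveServer2 heap"
else
  echo "insufficient memory: HiveServer2 start is likely to fail"
fi
```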


2 REPLIES

Re: Failed to start HiveServer2 using Ambari 2.7 and HDP 3.1

Mentor

@Yegane Ahmadnejad

1. Don't manually set JAVA_HOME if you are on RHEL/CentOS; register the JDK with alternatives instead:


# extract the JDK into /opt so the paths below exist
tar xzf jdk-8u171-linux-x64.tar.gz -C /opt
cd /opt/jdk1.8.0_171/
# register java, jar, and javac with alternatives, then select them
alternatives --install /usr/bin/java java /opt/jdk1.8.0_171/bin/java 2
alternatives --config java
alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_171/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_171/bin/javac 2
alternatives --set jar /opt/jdk1.8.0_171/bin/jar
alternatives --set javac /opt/jdk1.8.0_171/bin/javac

2. Don't change the warehouse root directory

3. Don't create the hiveserver2 znode manually.

4. I didn't see the Hive metastore database setup step. Did you configure and test the database connection in Ambari?
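For point 4, the JDBC settings Ambari wrote for the metastore can be inspected directly in hive-site.xml. A minimal sketch (the helper name and default path are assumptions based on the HDP 3.1 layout used elsewhere in this thread):

```shell
# Print the metastore JDBC properties from hive-site.xml so the database
# setup can be verified at a glance. Default path assumes the HDP layout.
# usage: show_metastore_db [/path/to/hive-site.xml]
show_metastore_db() {
  grep -A1 -E 'ConnectionURL|ConnectionDriverName|ConnectionUserName' \
    "${1:-/usr/hdp/current/hive-server2/conf/hive-site.xml}"
}
```

If the ConnectionURL points at an unreachable or uninitialized database, HiveServer2 will keep retrying and de-registering from ZooKeeper exactly as the logs above show.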

Re: Failed to start HiveServer2 using Ambari 2.7 and HDP 3.1

New Contributor

My problem was that, although the Hive memory settings were configured correctly, the heap could not actually be allocated because the node did not have sufficient memory. Increasing the node's memory solved it.
