HiveServer2 failed to start


Attachment: hiveserver2.log.txt

@Jay Kumar SenSharma @Josh Elser @Ravi Mutyala @Vinicius Higa Murakami @Geoffrey Shelton Okot

Hi, I am installing a single-node Hadoop cluster on an AWS EC2 instance. All components start except HiveServer2. I have looked through other threads and tried to resolve this, but no luck so far. I made the two config changes below, but it still fails to start. Please review the attached hiveserver2.log as well as the stderr/stdout below. I would appreciate any feedback or suggestions. Thank you.

- Added port 3306 to the Database URL: jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true
- Changed hive.server2.webui.port from 10002 to 10202
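As a quick sanity check on the first change, the JDBC URL can be parsed to confirm the host, port, and database name the metastore connection will actually use. This is a minimal illustrative sketch (the helper name is mine, not part of Hive or Ambari):

```python
from urllib.parse import urlparse, parse_qs

def parse_jdbc_mysql_url(url):
    """Split a jdbc:mysql:// URL into host, port, database, and options."""
    # Strip the "jdbc:" prefix so urlparse treats mysql:// as the scheme.
    parsed = urlparse(url[len("jdbc:"):])
    return {
        "host": parsed.hostname,
        "port": parsed.port,
        "database": parsed.path.lstrip("/"),
        "options": parse_qs(parsed.query),
    }

info = parse_jdbc_mysql_url(
    "jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true")
print(info["host"], info["port"], info["database"])  # prints: localhost 3306 hive
```

If the parsed host/port do not match where MySQL is actually listening (check with `ss -tlnp | grep 3306`), HiveServer2 will fail during metastore startup.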

stderr: /var/lib/ambari-agent/data/errors-2248.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py", line 143, in <module>
    HiveServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 1006, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py", line 53, in start
    hive_service('hiveserver2', action = 'start', upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_service.py", line 101, in hive_service
    wait_for_znode()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/decorator.py", line 54, in wrapper
    return function(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_service.py", line 184, in wait_for_znode
    raise Exception(format("HiveServer2 is no longer running, check the logs at {hive_log_dir}"))
Exception: HiveServer2 is no longer running, check the logs at /var/log/hive
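Note that the final exception is raised by Ambari's wait_for_znode() after it gives up waiting for HiveServer2 to register itself in ZooKeeper, so the actual root cause is in the HiveServer2 logs under /var/log/hive, not in this traceback. A small probe like the one below (an illustrative sketch; it assumes the default ports of 2181 for ZooKeeper and 10000 for the HiveServer2 Thrift endpoint) can help distinguish a process that never bound its port from one that started and then died:

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

# Assumed defaults: ZooKeeper on 2181, HiveServer2 Thrift on 10000.
for name, port in [("zookeeper", 2181), ("hiveserver2", 10000)]:
    print(name, "up" if is_port_open("localhost", port) else "down")
```

If ZooKeeper is down, the znode can never appear regardless of HiveServer2's state; if only HiveServer2 is down, the cause of the exit should be near the end of hiveserver2.log.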

stdout: /var/lib/ambari-agent/data/output-2248.txt

2019-04-11 07:33:19,031 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-04-11 07:33:19,044 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf 2019-04-11 07:33:19,176 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-04-11 07:33:19,183 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf 2019-04-11 07:33:19,184 - Group['kms'] {} 2019-04-11 07:33:19,185 - Group['livy'] {} 2019-04-11 07:33:19,185 - Group['spark'] {} 2019-04-11 07:33:19,185 - Group['ranger'] {} 2019-04-11 07:33:19,186 - Group['hdfs'] {} 2019-04-11 07:33:19,186 - Group['zeppelin'] {} 2019-04-11 07:33:19,186 - Group['hadoop'] {} 2019-04-11 07:33:19,186 - Group['users'] {} 2019-04-11 07:33:19,186 - Group['knox'] {} 2019-04-11 07:33:19,187 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,188 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,190 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,191 - User['superset'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,192 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,193 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,194 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None} 2019-04-11 07:33:19,195 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None} 2019-04-11 07:33:19,196 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,197 - 
User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None} 2019-04-11 07:33:19,198 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,199 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,200 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,201 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2019-04-11 07:33:19,203 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2019-04-11 07:33:19,204 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None} 2019-04-11 07:33:19,205 - User['logsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,206 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None} 2019-04-11 07:33:19,207 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,208 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2019-04-11 07:33:19,209 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,210 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None} 2019-04-11 07:33:19,211 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,211 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,212 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 
'groups': ['hadoop'], 'uid': None} 2019-04-11 07:33:19,213 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None} 2019-04-11 07:33:19,213 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-04-11 07:33:19,214 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2019-04-11 07:33:19,218 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2019-04-11 07:33:19,219 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2019-04-11 07:33:19,219 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-04-11 07:33:19,220 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-04-11 07:33:19,220 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2019-04-11 07:33:19,226 - call returned (0, '1011') 2019-04-11 07:33:19,226 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1011'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2019-04-11 07:33:19,230 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1011'] due to not_if 2019-04-11 07:33:19,230 - Group['hdfs'] {} 2019-04-11 07:33:19,231 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']} 2019-04-11 07:33:19,231 - FS Type: HDFS 2019-04-11 07:33:19,231 - Directory['/etc/hadoop'] {'mode': 0755} 2019-04-11 07:33:19,241 - 
File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2019-04-11 07:33:19,242 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2019-04-11 07:33:19,253 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2019-04-11 07:33:19,257 - Skipping Execute[('setenforce', '0')] due to not_if 2019-04-11 07:33:19,257 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'} 2019-04-11 07:33:19,258 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'} 2019-04-11 07:33:19,258 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'} 2019-04-11 07:33:19,259 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'} 2019-04-11 07:33:19,261 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2019-04-11 07:33:19,262 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2019-04-11 07:33:19,266 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:19,273 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2019-04-11 07:33:19,273 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2019-04-11 07:33:19,274 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2019-04-11 07:33:19,276 - 
File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:19,279 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2019-04-11 07:33:19,281 - Skipping unlimited key JCE policy check and setup since it is not required 2019-04-11 07:33:19,553 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf 2019-04-11 07:33:19,560 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20} 2019-04-11 07:33:19,577 - call returned (0, 'hive-server2 - 3.1.0.0-78') 2019-04-11 07:33:19,577 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-04-11 07:33:19,591 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://ip-172-31-18-160.ec2.internal:8080/resources/CredentialUtil.jar'), 'mode': 0755} 2019-04-11 07:33:19,592 - Not downloading the file from http://ip-172-31-18-160.ec2.internal:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists 2019-04-11 07:33:20,172 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive-server.pid 1>/tmp/tmpaXU8zt 2>/tmp/tmpRT1oNg''] {'quiet': False} 2019-04-11 07:33:20,197 - call returned (0, '') 2019-04-11 07:33:20,197 - get_user_call_output returned (0, u'32168', u'') 2019-04-11 07:33:20,198 - Execute['ambari-sudo.sh kill 32168'] {'not_if': '! (ls /var/run/hive/hive-server.pid >/dev/null 2>&1 && ps -p 32168 >/dev/null 2>&1)'} 2019-04-11 07:33:20,213 - Skipping Execute['ambari-sudo.sh kill 32168'] due to not_if 2019-04-11 07:33:20,213 - Execute['ambari-sudo.sh kill -9 32168'] {'not_if': '! (ls /var/run/hive/hive-server.pid >/dev/null 2>&1 && ps -p 32168 >/dev/null 2>&1) || ( sleep 5 && ! 
(ls /var/run/hive/hive-server.pid >/dev/null 2>&1 && ps -p 32168 >/dev/null 2>&1) )', 'ignore_failures': True} 2019-04-11 07:33:20,227 - Skipping Execute['ambari-sudo.sh kill -9 32168'] due to not_if 2019-04-11 07:33:20,227 - Execute['! (ls /var/run/hive/hive-server.pid >/dev/null 2>&1 && ps -p 32168 >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 3} 2019-04-11 07:33:20,241 - File['/var/run/hive/hive-server.pid'] {'action': ['delete']} 2019-04-11 07:33:20,242 - Deleting File['/var/run/hive/hive-server.pid'] 2019-04-11 07:33:20,242 - Pid file /var/run/hive/hive-server.pid is empty or does not exist 2019-04-11 07:33:20,244 - Directories to fill with configs: [u'/usr/hdp/current/hive-server2/conf', u'/usr/hdp/current/hive-server2/conf/'] 2019-04-11 07:33:20,245 - Directory['/etc/hive/3.1.0.0-78/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755} 2019-04-11 07:33:20,245 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...} 2019-04-11 07:33:20,252 - Generating config: /etc/hive/3.1.0.0-78/0/mapred-site.xml 2019-04-11 07:33:20,253 - File['/etc/hive/3.1.0.0-78/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-04-11 07:33:20,281 - File['/etc/hive/3.1.0.0-78/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,281 - File['/etc/hive/3.1.0.0-78/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755} 2019-04-11 07:33:20,283 - File['/etc/hive/3.1.0.0-78/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,285 - File['/etc/hive/3.1.0.0-78/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,287 - 
File['/etc/hive/3.1.0.0-78/0/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,289 - File['/etc/hive/3.1.0.0-78/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,290 - File['/etc/hive/3.1.0.0-78/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,290 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://ip-172-31-18-160.ec2.internal:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.default': u'container'}} 2019-04-11 07:33:20,296 - Generating config: /etc/hive/3.1.0.0-78/0/beeline-site.xml 2019-04-11 07:33:20,296 - File['/etc/hive/3.1.0.0-78/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-04-11 07:33:20,297 - File['/etc/hive/3.1.0.0-78/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,298 - Directory['/etc/hive/3.1.0.0-78/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755} 2019-04-11 07:33:20,298 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...} 2019-04-11 07:33:20,303 - Generating config: /etc/hive/3.1.0.0-78/0/mapred-site.xml 2019-04-11 07:33:20,303 - File['/etc/hive/3.1.0.0-78/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-04-11 07:33:20,344 - File['/etc/hive/3.1.0.0-78/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,344 
- File['/etc/hive/3.1.0.0-78/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755} 2019-04-11 07:33:20,348 - File['/etc/hive/3.1.0.0-78/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,351 - File['/etc/hive/3.1.0.0-78/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,354 - File['/etc/hive/3.1.0.0-78/0/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,357 - File['/etc/hive/3.1.0.0-78/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,359 - File['/etc/hive/3.1.0.0-78/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,359 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://ip-172-31-18-160.ec2.internal:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.default': u'container'}} 2019-04-11 07:33:20,368 - Generating config: /etc/hive/3.1.0.0-78/0/beeline-site.xml 2019-04-11 07:33:20,368 - File['/etc/hive/3.1.0.0-78/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-04-11 07:33:20,370 - File['/etc/hive/3.1.0.0-78/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2019-04-11 07:33:20,371 - File['/usr/hdp/current/hive-server2/conf/hive-site.jceks'] {'content': StaticFile('/var/lib/ambari-agent/cred/conf/hive_server/hive-site.jceks'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0640} 2019-04-11 07:33:20,371 - Writing 
File['/usr/hdp/current/hive-server2/conf/hive-site.jceks'] because contents don't match 2019-04-11 07:33:20,372 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-server2/conf/', 'mode': 0644, 'configuration_attributes': {u'hidden': {u'javax.jdo.option.ConnectionPassword': u'HIVE_CLIENT,CONFIG_DOWNLOAD'}}, 'owner': 'hive', 'configurations': ...} 2019-04-11 07:33:20,380 - Generating config: /usr/hdp/current/hive-server2/conf/hive-site.xml 2019-04-11 07:33:20,380 - File['/usr/hdp/current/hive-server2/conf/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-04-11 07:33:20,495 - Writing File['/usr/hdp/current/hive-server2/conf/hive-site.xml'] because contents don't match 2019-04-11 07:33:20,496 - Generating Atlas Hook config file /usr/hdp/current/hive-server2/conf/atlas-application.properties 2019-04-11 07:33:20,496 - PropertiesFile['/usr/hdp/current/hive-server2/conf/atlas-application.properties'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'properties': ...} 2019-04-11 07:33:20,498 - Generating properties file: /usr/hdp/current/hive-server2/conf/atlas-application.properties 2019-04-11 07:33:20,498 - File['/usr/hdp/current/hive-server2/conf/atlas-application.properties'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-04-11 07:33:20,508 - Writing File['/usr/hdp/current/hive-server2/conf/atlas-application.properties'] because contents don't match 2019-04-11 07:33:20,511 - File['/usr/hdp/current/hive-server2/conf//hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0755} 2019-04-11 07:33:20,512 - Writing File['/usr/hdp/current/hive-server2/conf//hive-env.sh'] because contents don't match 2019-04-11 07:33:20,512 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'} 2019-04-11 07:33:20,513 - 
File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644} 2019-04-11 07:33:20,514 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://ip-172-31-18-160.ec2.internal:8080/resources/DBConnectionVerification.jar'), 'mode': 0644} 2019-04-11 07:33:20,514 - Not downloading the file from http://ip-172-31-18-160.ec2.internal:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists 2019-04-11 07:33:20,514 - Directory['/var/run/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'} 2019-04-11 07:33:20,515 - Directory['/var/log/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'} 2019-04-11 07:33:20,515 - Directory['/var/lib/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'} 2019-04-11 07:33:20,516 - File['/var/lib/ambari-agent/tmp/start_hiveserver2_script'] {'content': Template('startHiveserver2.sh.j2'), 'mode': 0755} 2019-04-11 07:33:20,519 - File['/usr/hdp/current/hive-server2/conf/hadoop-metrics2-hiveserver2.properties'] {'content': Template('hadoop-metrics2-hiveserver2.properties.j2'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600} 2019-04-11 07:33:20,520 - XmlConfig['hiveserver2-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-server2/conf/', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...} 2019-04-11 07:33:20,524 - Generating config: /usr/hdp/current/hive-server2/conf/hiveserver2-site.xml 2019-04-11 07:33:20,525 - File['/usr/hdp/current/hive-server2/conf/hiveserver2-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'} 2019-04-11 07:33:20,531 - Called copy_to_hdfs tarball: mapreduce 2019-04-11 07:33:20,531 - Stack Feature Version Info: 
Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-04-11 07:33:20,531 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True 2019-04-11 07:33:20,531 - Source file: /usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz , Dest file in HDFS: /hdp/apps/3.1.0.0-78/mapreduce/mapreduce.tar.gz 2019-04-11 07:33:20,531 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-04-11 07:33:20,531 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True 2019-04-11 07:33:20,532 - HdfsResource['/hdp/apps/3.1.0.0-78/mapreduce'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555} 2019-04-11 07:33:20,533 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/mapreduce?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmprXroOP 2>/tmp/tmpvqsdPn''] {'logoutput': None, 'quiet': False} 2019-04-11 07:33:20,562 - call returned (0, '') 2019-04-11 07:33:20,563 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16425,"group":"hdfs","length":0,"modificationTime":1554360640209,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 
2019-04-11 07:33:20,563 - HdfsResource['/hdp/apps/3.1.0.0-78/mapreduce/mapreduce.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444} 2019-04-11 07:33:20,564 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/mapreduce/mapreduce.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpL6_6BI 2>/tmp/tmpKWaqB8''] {'logoutput': None, 'quiet': False} 2019-04-11 07:33:20,591 - call returned (0, '') 2019-04-11 07:33:20,591 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1554336894103,"blockSize":134217728,"childrenNum":0,"fileId":16426,"group":"hadoop","length":309320206,"modificationTime":1554336894825,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'') 2019-04-11 07:33:20,592 - DFS file /hdp/apps/3.1.0.0-78/mapreduce/mapreduce.tar.gz is identical to /usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz, skipping the copying 2019-04-11 07:33:20,592 - Will attempt to copy mapreduce tarball from /usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz to DFS at /hdp/apps/3.1.0.0-78/mapreduce/mapreduce.tar.gz. 
2019-04-11 07:33:20,592 - Called copy_to_hdfs tarball: tez 2019-04-11 07:33:20,592 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-04-11 07:33:20,592 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True 2019-04-11 07:33:20,592 - Source file: /usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz , Dest file in HDFS: /hdp/apps/3.1.0.0-78/tez/tez.tar.gz 2019-04-11 07:33:20,592 - Preparing the Tez tarball... 2019-04-11 07:33:20,592 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-04-11 07:33:20,592 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True 2019-04-11 07:33:20,592 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-04-11 07:33:20,592 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True 2019-04-11 07:33:20,593 - Extracting /usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz to /var/lib/ambari-agent/tmp/mapreduce-tarball-Ikgfzz 2019-04-11 07:33:20,593 - Execute[('tar', '-xf', u'/usr/hdp/3.1.0.0-78/hadoop/mapreduce.tar.gz', '-C', '/var/lib/ambari-agent/tmp/mapreduce-tarball-Ikgfzz/')] {'tries': 3, 'sudo': True, 'try_sleep': 1} 2019-04-11 07:33:23,835 - Extracting /usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz to /var/lib/ambari-agent/tmp/tez-tarball-k8_qHE 2019-04-11 07:33:23,836 - Execute[('tar', '-xf', u'/usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz', '-C', '/var/lib/ambari-agent/tmp/tez-tarball-k8_qHE/')] {'tries': 3, 'sudo': True, 'try_sleep': 1} 2019-04-11 07:33:25,711 - Execute[('cp', '-a', '/var/lib/ambari-agent/tmp/mapreduce-tarball-Ikgfzz/hadoop/lib/native', '/var/lib/ambari-agent/tmp/tez-tarball-k8_qHE/lib')] {'sudo': True} 2019-04-11 07:33:25,733 - Directory['/var/lib/ambari-agent/tmp/tez-tarball-k8_qHE/lib'] {'recursive_ownership': True, 'mode': 0755, 'cd_access': 'a'} 2019-04-11 07:33:25,734 - Creating a new Tez tarball at 
/var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz 2019-04-11 07:33:25,734 - Execute[('tar', '-zchf', '/tmp/tmpqELmQB', '-C', '/var/lib/ambari-agent/tmp/tez-tarball-k8_qHE', '.')] {'tries': 3, 'sudo': True, 'try_sleep': 1} 2019-04-11 07:33:36,613 - Execute[('mv', '/tmp/tmpqELmQB', '/var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz')] {} 2019-04-11 07:33:36,749 - HdfsResource['/hdp/apps/3.1.0.0-78/tez'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555} 2019-04-11 07:33:36,751 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/tez?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpNEjxck 2>/tmp/tmpn8YbwX''] {'logoutput': None, 'quiet': False} 2019-04-11 07:33:36,779 - call returned (0, '') 2019-04-11 07:33:36,780 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16427,"group":"hdfs","length":0,"modificationTime":1554336913432,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2019-04-11 07:33:36,781 - HdfsResource['/hdp/apps/3.1.0.0-78/tez/tez.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': 
'/var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444} 2019-04-11 07:33:36,782 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/tez/tez.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpmhApYd 2>/tmp/tmpfLkrGV''] {'logoutput': None, 'quiet': False} 2019-04-11 07:33:36,809 - call returned (0, '') 2019-04-11 07:33:36,809 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1554336913432,"blockSize":134217728,"childrenNum":0,"fileId":16428,"group":"hadoop","length":255717070,"modificationTime":1554336913880,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'') 2019-04-11 07:33:36,810 - Not replacing existing DFS file /hdp/apps/3.1.0.0-78/tez/tez.tar.gz which is different from /var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz, due to replace_existing_files=False 2019-04-11 07:33:36,810 - Will attempt to copy tez tarball from /var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz to DFS at /hdp/apps/3.1.0.0-78/tez/tez.tar.gz. 
2019-04-11 07:33:36,810 - Called copy_to_hdfs tarball: pig
2019-04-11 07:33:36,811 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-11 07:33:36,811 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-11 07:33:36,811 - Source file: /usr/hdp/3.1.0.0-78/pig/pig.tar.gz , Dest file in HDFS: /hdp/apps/3.1.0.0-78/pig/pig.tar.gz
2019-04-11 07:33:36,811 - HdfsResource['/hdp/apps/3.1.0.0-78/pig'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555}
2019-04-11 07:33:36,812 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/pig?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp6PiqJr 2>/tmp/tmpNwobnl''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:36,840 - call returned (0, '')
2019-04-11 07:33:36,840 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":17525,"group":"hdfs","length":0,"modificationTime":1554360635310,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:36,841 - HdfsResource['/hdp/apps/3.1.0.0-78/pig/pig.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/3.1.0.0-78/pig/pig.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444}
2019-04-11 07:33:36,842 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/pig/pig.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpOQ4o9H 2>/tmp/tmpu3n7Wp''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:36,870 - call returned (0, '')
2019-04-11 07:33:36,870 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1554360635310,"blockSize":134217728,"childrenNum":0,"fileId":17526,"group":"hadoop","length":159329685,"modificationTime":1554360635593,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'')
2019-04-11 07:33:36,871 - DFS file /hdp/apps/3.1.0.0-78/pig/pig.tar.gz is identical to /usr/hdp/3.1.0.0-78/pig/pig.tar.gz, skipping the copying
2019-04-11 07:33:36,871 - Will attempt to copy pig tarball from /usr/hdp/3.1.0.0-78/pig/pig.tar.gz to DFS at /hdp/apps/3.1.0.0-78/pig/pig.tar.gz.
2019-04-11 07:33:36,871 - Called copy_to_hdfs tarball: hive
2019-04-11 07:33:36,871 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-11 07:33:36,871 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-11 07:33:36,871 - Source file: /usr/hdp/3.1.0.0-78/hive/hive.tar.gz , Dest file in HDFS: /hdp/apps/3.1.0.0-78/hive/hive.tar.gz
2019-04-11 07:33:36,872 - HdfsResource['/hdp/apps/3.1.0.0-78/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555}
2019-04-11 07:33:36,872 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/hive?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpPtkIy5 2>/tmp/tmpvmVx4g''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:36,900 - call returned (0, '')
2019-04-11 07:33:36,900 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":17527,"group":"hdfs","length":0,"modificationTime":1554360638511,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:36,901 - HdfsResource['/hdp/apps/3.1.0.0-78/hive/hive.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/3.1.0.0-78/hive/hive.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444}
2019-04-11 07:33:36,902 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/hive/hive.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmprfp3gt 2>/tmp/tmppX4B1F''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:36,929 - call returned (0, '')
2019-04-11 07:33:36,930 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1554360638511,"blockSize":134217728,"childrenNum":0,"fileId":17528,"group":"hadoop","length":363292359,"modificationTime":1554360639100,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'')
2019-04-11 07:33:36,930 - DFS file /hdp/apps/3.1.0.0-78/hive/hive.tar.gz is identical to /usr/hdp/3.1.0.0-78/hive/hive.tar.gz, skipping the copying
2019-04-11 07:33:36,930 - Will attempt to copy hive tarball from /usr/hdp/3.1.0.0-78/hive/hive.tar.gz to DFS at /hdp/apps/3.1.0.0-78/hive/hive.tar.gz.
2019-04-11 07:33:36,930 - Called copy_to_hdfs tarball: sqoop
2019-04-11 07:33:36,931 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-11 07:33:36,931 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-11 07:33:36,931 - Source file: /usr/hdp/3.1.0.0-78/sqoop/sqoop.tar.gz , Dest file in HDFS: /hdp/apps/3.1.0.0-78/sqoop/sqoop.tar.gz
2019-04-11 07:33:36,931 - HdfsResource['/hdp/apps/3.1.0.0-78/sqoop'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555}
2019-04-11 07:33:36,932 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/sqoop?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpl1EUFn 2>/tmp/tmp1VRSbE''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:36,959 - call returned (0, '')
2019-04-11 07:33:36,960 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":17529,"group":"hdfs","length":0,"modificationTime":1554360639912,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:36,961 - HdfsResource['/hdp/apps/3.1.0.0-78/sqoop/sqoop.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/3.1.0.0-78/sqoop/sqoop.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444}
2019-04-11 07:33:36,961 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/sqoop/sqoop.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmprDzLdo 2>/tmp/tmp0gA8MN''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:36,989 - call returned (0, '')
2019-04-11 07:33:36,989 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1554360639912,"blockSize":134217728,"childrenNum":0,"fileId":17530,"group":"hadoop","length":77257805,"modificationTime":1554360640042,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'')
2019-04-11 07:33:36,990 - DFS file /hdp/apps/3.1.0.0-78/sqoop/sqoop.tar.gz is identical to /usr/hdp/3.1.0.0-78/sqoop/sqoop.tar.gz, skipping the copying
2019-04-11 07:33:36,990 - Will attempt to copy sqoop tarball from /usr/hdp/3.1.0.0-78/sqoop/sqoop.tar.gz to DFS at /hdp/apps/3.1.0.0-78/sqoop/sqoop.tar.gz.
2019-04-11 07:33:36,990 - Called copy_to_hdfs tarball: hadoop_streaming
2019-04-11 07:33:36,990 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-04-11 07:33:36,990 - Tarball version was calcuated as 3.1.0.0-78. Use Command Version: True
2019-04-11 07:33:36,990 - Source file: /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-streaming.jar , Dest file in HDFS: /hdp/apps/3.1.0.0-78/mapreduce/hadoop-streaming.jar
2019-04-11 07:33:36,990 - HdfsResource['/hdp/apps/3.1.0.0-78/mapreduce'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555}
2019-04-11 07:33:36,991 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/mapreduce?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpIgVr2s 2>/tmp/tmpPw7kpI''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:37,020 - call returned (0, '')
2019-04-11 07:33:37,020 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16425,"group":"hdfs","length":0,"modificationTime":1554360640209,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:37,021 - HdfsResource['/hdp/apps/3.1.0.0-78/mapreduce/hadoop-streaming.jar'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-streaming.jar', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444}
2019-04-11 07:33:37,021 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/hdp/apps/3.1.0.0-78/mapreduce/hadoop-streaming.jar?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmphgS9Tq 2>/tmp/tmppels18''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:37,049 - call returned (0, '')
2019-04-11 07:33:37,050 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":1554360640209,"blockSize":134217728,"childrenNum":0,"fileId":17531,"group":"hadoop","length":176342,"modificationTime":1554360640219,"owner":"hdfs","pathSuffix":"","permission":"444","replication":3,"storagePolicy":0,"type":"FILE"}}200', u'')
2019-04-11 07:33:37,050 - DFS file /hdp/apps/3.1.0.0-78/mapreduce/hadoop-streaming.jar is identical to /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-streaming.jar, skipping the copying
2019-04-11 07:33:37,050 - Will attempt to copy hadoop_streaming tarball from /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-streaming.jar to DFS at /hdp/apps/3.1.0.0-78/mapreduce/hadoop-streaming.jar.
2019-04-11 07:33:37,051 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01755}
2019-04-11 07:33:37,051 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp3Hxs8M 2>/tmp/tmpUBepvY''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:37,078 - call returned (0, '')
2019-04-11 07:33:37,079 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":46,"fileId":17382,"group":"hadoop","length":0,"modificationTime":1554360640673,"owner":"hive","pathSuffix":"","permission":"1755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:37,079 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/query_data/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2019-04-11 07:33:37,080 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/query_data/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpZ2BEBf 2>/tmp/tmpE5TzII''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:37,107 - call returned (0, '')
2019-04-11 07:33:37,107 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":1,"fileId":17383,"group":"hadoop","length":0,"modificationTime":1554337093672,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:37,108 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/dag_meta'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2019-04-11 07:33:37,109 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/dag_meta?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpqmlwPl 2>/tmp/tmp1PFGox''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:37,136 - call returned (0, '')
2019-04-11 07:33:37,136 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":0,"fileId":17532,"group":"hadoop","length":0,"modificationTime":1554360640439,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:37,137 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/dag_data'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2019-04-11 07:33:37,137 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/dag_data?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpixuhW7 2>/tmp/tmpgULlCg''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:37,164 - call returned (0, '')
2019-04-11 07:33:37,165 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":0,"fileId":17533,"group":"hadoop","length":0,"modificationTime":1554360640556,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:37,165 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/app_data'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2019-04-11 07:33:37,166 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/app_data?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp5oy1T8 2>/tmp/tmpsL9Vo3''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:37,193 - call returned (0, '')
2019-04-11 07:33:37,194 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":5,"fileId":17534,"group":"hadoop","length":0,"modificationTime":1554948644290,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:37,194 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']}
2019-04-11 07:33:37,197 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-04-11 07:33:37,197 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json
2019-04-11 07:33:37,197 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json'] {'content': Template('input.config-hive.json.j2'), 'mode': 0644}
2019-04-11 07:33:37,198 - Hive: Setup ranger: command retry not enabled thus skipping if ranger admin is down !
2019-04-11 07:33:37,198 - HdfsResource['/ranger/audit'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'user': 'hdfs', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'recursive_chmod': True, 'owner': 'hdfs', 'group': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0755}
2019-04-11 07:33:37,198 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/ranger/audit?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpEAoFKw 2>/tmp/tmpOLEUxH''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:37,230 - call returned (0, '')
2019-04-11 07:33:37,230 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":7,"fileId":16390,"group":"hdfs","length":0,"modificationTime":1554362249033,"owner":"hdfs","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:37,231 - HdfsResource['/ranger/audit/hiveServer2'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'user': 'hdfs', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'recursive_chmod': True, 'owner': 'hive', 'group': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0700}
2019-04-11 07:33:37,232 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://ip-172-31-18-160.ec2.internal:50070/webhdfs/v1/ranger/audit/hiveServer2?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpNQWoUI 2>/tmp/tmpn288Qf''] {'logoutput': None, 'quiet': False}
2019-04-11 07:33:37,269 - call returned (0, '')
2019-04-11 07:33:37,269 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":17535,"group":"hive","length":0,"modificationTime":1554360640825,"owner":"hive","pathSuffix":"","permission":"700","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-04-11 07:33:37,269 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://ip-172-31-18-160.ec2.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']}
2019-04-11 07:33:37,270 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}
2019-04-11 07:33:37,287 - call returned (0, 'hive-server2 - 3.1.0.0-78')
2019-04-11 07:33:37,293 - Skipping Ranger API calls, as policy cache file exists for hive
2019-04-11 07:33:37,293 - If service name for hive is not created on Ranger Admin, then to re-create it delete policy cache file: /etc/ranger/RxProfiler_hive/policycache/hiveServer2_RxProfiler_hive.json
2019-04-11 07:33:37,294 - File['/usr/hdp/current/hive-server2/conf//ranger-security.xml'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-11 07:33:37,294 - Writing File['/usr/hdp/current/hive-server2/conf//ranger-security.xml'] because contents don't match
2019-04-11 07:33:37,295 - Directory['/etc/ranger/RxProfiler_hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-04-11 07:33:37,295 - Directory['/etc/ranger/RxProfiler_hive/policycache'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2019-04-11 07:33:37,295 - File['/etc/ranger/RxProfiler_hive/policycache/hiveServer2_RxProfiler_hive.json'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-04-11 07:33:37,296 - XmlConfig['ranger-hive-audit.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-server2/conf/', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2019-04-11 07:33:37,301 - Generating config: /usr/hdp/current/hive-server2/conf/ranger-hive-audit.xml
2019-04-11 07:33:37,301 - File['/usr/hdp/current/hive-server2/conf/ranger-hive-audit.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2019-04-11 07:33:37,307 - XmlConfig['ranger-hive-security.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-server2/conf/', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2019-04-11 07:33:37,312 - Generating config: /usr/hdp/current/hive-server2/conf/ranger-hive-security.xml
2019-04-11 07:33:37,312 - File['/usr/hdp/current/hive-server2/conf/ranger-hive-security.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2019-04-11 07:33:37,317 - XmlConfig['ranger-policymgr-ssl.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-server2/conf/', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2019-04-11 07:33:37,322 - Generating config: /usr/hdp/current/hive-server2/conf/ranger-policymgr-ssl.xml
2019-04-11 07:33:37,322 - File['/usr/hdp/current/hive-server2/conf/ranger-policymgr-ssl.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2019-04-11 07:33:37,326 - Execute[(u'/usr/hdp/3.1.0.0-78/ranger-hive-plugin/ranger_credential_helper.py', '-l', u'/usr/hdp/3.1.0.0-78/ranger-hive-plugin/install/lib/*', '-f', '/etc/ranger/RxProfiler_hive/cred.jceks', '-k', 'sslKeyStore', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112'}, 'sudo': True}
Using Java:/usr/jdk64/jdk1.8.0_112/bin/java
Alias sslKeyStore created successfully!
2019-04-11 07:33:37,937 - Execute[(u'/usr/hdp/3.1.0.0-78/ranger-hive-plugin/ranger_credential_helper.py', '-l', u'/usr/hdp/3.1.0.0-78/ranger-hive-plugin/install/lib/*', '-f', '/etc/ranger/RxProfiler_hive/cred.jceks', '-k', 'sslTrustStore', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112'}, 'sudo': True}
Using Java:/usr/jdk64/jdk1.8.0_112/bin/java
Alias sslTrustStore created successfully!
2019-04-11 07:33:38,552 - File['/etc/ranger/RxProfiler_hive/cred.jceks'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0640}
2019-04-11 07:33:38,553 - File['/etc/ranger/RxProfiler_hive/.cred.jceks.crc'] {'owner': 'hive', 'only_if': 'test -e /etc/ranger/RxProfiler_hive/.cred.jceks.crc', 'group': 'hadoop', 'mode': 0640}
2019-04-11 07:33:38,556 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive-server.pid 1>/tmp/tmpljBS5M 2>/tmp/tmpsy8rjX''] {'quiet': False}
2019-04-11 07:33:38,572 - call returned (1, '')
2019-04-11 07:33:38,572 - Execution of 'cat /var/run/hive/hive-server.pid 1>/tmp/tmpljBS5M 2>/tmp/tmpsy8rjX' returned 1. cat: /var/run/hive/hive-server.pid: No such file or directory
2019-04-11 07:33:38,572 - get_user_call_output returned (1, u'', u'cat: /var/run/hive/hive-server.pid: No such file or directory')
2019-04-11 07:33:38,573 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'hive --config /usr/hdp/current/hive-server2/conf/ --service metatool -listFSRoot' 2>/dev/null | grep hdfs:// | cut -f1,2,3 -d '/' | grep -v 'hdfs://ip-172-31-18-160.ec2.internal:8020' | head -1'] {}
2019-04-11 07:33:46,854 - call returned (0, '')
2019-04-11 07:33:46,855 - Execute['/var/lib/ambari-agent/tmp/start_hiveserver2_script /var/log/hive/hive-server2.out /var/log/hive/hive-server2.err /var/run/hive/hive-server.pid /usr/hdp/current/hive-server2/conf/ /etc/tez/conf'] {'environment': {'HIVE_BIN': 'hive', 'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112', 'HADOOP_HOME': u'/usr/hdp/current/hadoop-client'}, 'not_if': 'ls /var/run/hive/hive-server.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1', 'user': 'hive', 'path': [u'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/ambari-agent:/usr/hdp/current/hive-server2/bin:/usr/hdp/3.1.0.0-78/hadoop/bin']}
2019-04-11 07:33:46,877 - Execute['/usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/hive-server2/lib/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true' hive [PROTECTED] com.mysql.jdbc.Driver'] {'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 5, 'try_sleep': 10}
2019-04-11 07:33:47,205 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:33:47,751 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:33:47,752 - Will retry 29 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:33:57,761 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:33:58,280 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:33:58,280 - Will retry 28 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:34:08,288 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:34:08,810 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:34:08,811 - Will retry 27 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:34:18,821 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:34:19,343 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:34:19,343 - Will retry 26 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:34:29,354 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:34:29,875 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:34:29,876 - Will retry 25 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:34:39,883 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:34:40,402 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:34:40,403 - Will retry 24 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:34:50,410 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:34:50,932 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:34:50,933 - Will retry 23 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:35:00,941 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:35:01,486 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:35:01,486 - Will retry 22 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:35:11,496 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:35:12,022 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:35:12,022 - Will retry 21 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:35:22,031 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:35:22,614 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:35:22,615 - Will retry 20 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:35:32,624 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:35:33,164 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:35:33,164 - Will retry 19 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:35:43,172 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:35:43,710 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:35:43,710 - Will retry 18 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:35:53,720 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:35:54,261 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:35:54,261 - Will retry 17 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:36:04,271 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:36:04,799 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:36:04,800 - Will retry 16 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:36:14,807 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:36:15,382 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:36:15,382 - Will retry 15 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
2019-04-11 07:36:25,393 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2019-04-11 07:36:25,923 - call returned (1, 'Node does not exist: /hiveserver2')
2019-04-11 07:36:25,924 - Will retry 14 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet.
Sleeping for 10 sec(s) 2019-04-11 07:36:35,932 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {} 2019-04-11 07:36:36,452 - call returned (1, 'Node does not exist: /hiveserver2') 2019-04-11 07:36:36,453 - Will retry 13 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s) 2019-04-11 07:36:46,463 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {} 2019-04-11 07:36:46,979 - call returned (1, 'Node does not exist: /hiveserver2') 2019-04-11 07:36:46,980 - Will retry 12 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s) 2019-04-11 07:36:56,989 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {} 2019-04-11 07:36:57,502 - call returned (1, 'Node does not exist: /hiveserver2') 2019-04-11 07:36:57,502 - Will retry 11 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s) 2019-04-11 07:37:07,512 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {} 2019-04-11 07:37:08,039 - call returned (1, 'Node does not exist: /hiveserver2') 2019-04-11 07:37:08,039 - Will retry 10 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s) 2019-04-11 07:37:18,041 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {} 2019-04-11 07:37:18,559 - call returned (1, 'Node does not exist: /hiveserver2') 2019-04-11 07:37:18,560 - Will retry 9 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. 
Sleeping for 10 sec(s) 2019-04-11 07:37:28,567 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {} 2019-04-11 07:37:29,081 - call returned (1, 'Node does not exist: /hiveserver2') 2019-04-11 07:37:29,082 - Will retry 8 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s) 2019-04-11 07:37:39,091 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {} 2019-04-11 07:37:39,636 - call returned (1, 'Node does not exist: /hiveserver2') 2019-04-11 07:37:39,637 - Will retry 7 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s) 2019-04-11 07:37:49,647 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-172-31-18-160.ec2.internal:2181 ls /hiveserver2 | grep 'serverUri=''] {} 2019-04-11 07:37:50,183 - call returned (1, 'Node does not exist: /hiveserver2') 2019-04-11 07:37:50,184 - Will retry 6 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s) 2019-04-11 07:38:00,193 - Process with pid 8031 is not running. Stale pid file at /var/run/hive/hive-server.pid 
Command failed after 1 tries
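For context on the countdown above: Ambari's wait_for_znode (in hive_service.py, per the traceback) simply polls ZooKeeper for a serverUri entry under /hiveserver2 and aborts once the retries run out. A minimal sketch of that loop, with the real zkCli.sh | grep 'serverUri=' pipeline replaced by a stub command so it can run anywhere; the function body is a simplification, not the actual Ambari code:

```shell
# Sketch of Ambari's wait_for_znode loop (simplified from hive_service.py).
# "$@" stands in for the real zkCli.sh | grep 'serverUri=' check, so the
# demo below runs without a cluster.
wait_for_znode() {
  tries=$1; sleep_secs=$2; shift 2
  i=1
  while [ "$i" -le "$tries" ]; do
    if "$@"; then
      echo "znode ready"
      return 0
    fi
    echo "Will retry $((tries - i)) time(s): znode not ready"
    sleep "$sleep_secs"
    i=$((i + 1))
  done
  echo "HiveServer2 is no longer running, check the logs" >&2
  return 1
}

# Demo with stub checks instead of zkCli.sh (sleep 0 keeps it fast):
ok_out=$(wait_for_znode 3 0 true)
fail_out=$(wait_for_znode 3 0 false 2>/dev/null) || true
echo "$ok_out"
echo "$fail_out"
```

The takeaway: the loop never makes HiveServer2 start; it only watches for the registration, so the root cause is always in hiveserver2.log, not in this script.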


9 REPLIES 9

avatar
Explorer
@Jay Kumar SenSharma@Vinicius Higa Murakami
@Josh Elser@Geoffrey Shelton Okot


Can someone take a look at this issue and advise please? Thank you very much.

avatar
Master Mentor

@Andy Sutan

I am assuming you are on CentOS/RHEL. To resolve your issue, let's walk through the steps below.

Unfortunately, you didn't attach the hiveserver2.log found in

/var/log/hive/hiveserver2.log

Here are the steps I want you to follow

1. Revert your hive.server2.webui.port back to 10002 from 10202.

2. Can you try to connect to your hive database? In my example, the hive password is hive.

Make sure you have previously run the commands below; if not, do it now:

# yum install -y mysql-connector-java 
# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
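Before retesting the connection, it is worth confirming the connector jar actually landed where Ambari and Hive look for it. A small check; the two paths are the usual HDP locations and should be treated as assumptions for your layout:

```shell
# Report whether the MySQL JDBC driver is present at the usual HDP
# locations (assumed paths; adjust for your installation).
report=$(for jar in /usr/share/java/mysql-connector-java.jar \
                    /usr/hdp/current/hive-server2/lib/mysql-connector-java.jar; do
  if [ -e "$jar" ]; then echo "found:   $jar"; else echo "missing: $jar"; fi
done)
echo "$report"
```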

Please validate that the hive database is available and accessible for the user hive:

# mysql -uhive -phive
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 37
Server version: 5.5.60-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| ambari             |
| druid              |
| hive               |
| mysql              |
| oozie              |
| performance_schema |
| ranger             |
| rangerkms          |
| superset           |
+--------------------+
10 rows in set (0.09 sec)

MariaDB [(none)]> use hive;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [hive]>


So all looks okay. But if you don't see the hive database, you can create it from the CLI (see below) or use the Ambari wizard; make sure the connection test succeeds before you proceed.


###########################################################
# Create the hive user (hive/hive) and the hive db as the
# root user, assuming the root password here is {welcome1}
###########################################################
mysql -u root -pwelcome1
create database hive;
create user 'hive'@'localhost' identified by 'hive';
grant all privileges on hive.* to 'hive'@'localhost';
grant all privileges on hive.* to 'hive'@'%';
grant all privileges on hive.* to 'hive'@'FQDN' identified by 'hive';
grant all privileges on hive.* to 'hive'@'localhost' with grant option;
grant all privileges on hive.* to 'hive'@'FQDN' with grant option;
grant all privileges on hive.* to 'hive'@'%' with grant option;
flush privileges;
quit;

3. There seems to be a problem with HiveServer2 creating its znode in ZooKeeper. [caught exception: ZooKeeper node /hiveserver2 is not ready yet]

# /usr/hdp/3.x.x.x/zookeeper/bin/zkCli.sh 
Welcome to ZooKeeper! 
.. --sample output---- 
...... 
[zk: localhost:2181(CONNECTED) 0] ls /hiveserver2 
[serverUri=FQDN:10000;version=3.1.0.3.1.0.0-78;sequence=0000000046] 
[zk: localhost:2181(CONNECTED) 1]

My entry above shows my HiveServer2 registered with ZooKeeper, but I am sure you don't have such an entry in yours.
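The interactive session above can also be run as a one-shot check, which is handy when retrying restarts. A hedged sketch: the zkCli.sh path and localhost:2181 are the typical HDP defaults, and the snippet just reports "not registered" when the client or znode is absent:

```shell
# One-shot check: is a HiveServer2 instance registered under /hiveserver2?
# Assumes the standard HDP zkCli.sh path and ZooKeeper on localhost:2181;
# prints "not registered" if the client is missing or no znode exists.
msg=$(/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 \
        ls /hiveserver2 2>/dev/null | grep -q 'serverUri=' \
      && echo "registered" || echo "not registered")
echo "HiveServer2: $msg"
```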

4. To access your HDP host, choose updating the hostname to the public DNS/IP.

After the above, restart your cluster, and should you encounter issues, please send a detailed error stack.

Hope that helps

avatar
Contributor

Hi @Shelton, I am facing the same issue as mentioned above while starting HiveServer2. I followed your debug steps, and when I ran ls /hiveserver2 in the zkCli shell, I got the response below:

Welcome to ZooKeeper!
2022-02-02 23:11:02,741 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1013] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2022-02-02 23:11:02,834 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@856] - Socket connection established, initiating session, client: /127.0.0.1:58334, server: localhost/127.0.0.1:2181
2022-02-02 23:11:02,851 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1273] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x27ebd79aecc0088, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /hiveserver2
[]
[zk: localhost:2181(CONNECTED) 1] 

which means I don't have a hiveserver2 entry in my ZooKeeper. And when I check hiveserver2.log under the /var/log/hive folder, I see the permission-denied error below.

Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied: user=hive, access=EXECUTE, inode="/tmp/hive"
        at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:457)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:604)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1858)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1876)

Please help me resolve this if you have come across this issue before. Thanks
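One possible fix for the permission error above, as a dry-run sketch: the trace says the hive user cannot traverse /tmp/hive, so HiveServer2 dies before it ever registers its znode. Recreating the directory with the permissions HDP normally gives it may help; owner hive:hadoop and mode 1777 are assumptions, and since RangerHdfsAuthorizer is doing the denying, check your Ranger HDFS policies for /tmp as well. HDFS defaults to `echo hdfs` so the snippet only prints the commands; set HDFS=hdfs and run it as the hdfs superuser on the cluster:

```shell
# Dry-run by default: HDFS expands to 'echo hdfs', so this prints the
# commands instead of executing them. On the cluster, run with HDFS=hdfs
# as the hdfs superuser.
HDFS="${HDFS:-echo hdfs}"
$HDFS dfs -mkdir -p /tmp/hive
$HDFS dfs -chown hive:hadoop /tmp/hive   # usual HDP owner (assumption)
$HDFS dfs -chmod 1777 /tmp/hive          # sticky bit, like local /tmp
```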

avatar
Contributor

@Shelton As you have mentioned solution in Point 3,

3. There seems to be a problem with hiveserver creating a znode in zookeeper. [caught exception: ZooKeeper node /hiveserver2 is not ready yet]

How can I create the hiveserver2 znode in ZooKeeper if it's not created?

avatar
Master Mentor

@Andy Sutan

Did you see my response? Please reply after going through it, and also attach the hiveserver2.log.


avatar
Explorer

Thank you Geoffrey for the response. I will go through the response soon. Meanwhile, the hiveserver2.log is attached.

hiveserver2.log.txt

avatar
Master Mentor

@Andy Sutan

In the attached hiveserver2.log I see you have an issue with a port:

<<Caused by: java.net.BindException: Address already in use (Bind failed)>>

The offending port is causing the failure of HS2 to register with ZooKeeper!

The default HS2 port is 10000. Did you manually change any of these ports?

<<Caused by: org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:2181.>>

Check for the offending application with netstat -nap | grep <port>, for example:

# netstat -nap | grep 10000
tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      1266/java

Kill the offender

# kill -9 1266

10002 is the HiveServer2 web UI port and it should be freed up when HiveServer2 shuts down. The netstat output shows that some client is connected to your HiveServer2 UI port. You could try to figure out what client that may be and what it is doing since it is a bit unusual that a connection to the HiveServer2 UI would last very long. Finding out what client is running on that port may be a good thing.
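The check-and-kill steps above can be rolled into one snippet. This is a sketch: it assumes netstat with the -p flag is available (as in the output above), and it reports rather than fails when no listener is found:

```shell
# Find the PID listening on $port via netstat -nap (assumed available)
# and kill it; prints a note and exits cleanly when nothing is found.
port=10000
pid=$(netstat -nap 2>/dev/null | awk -v p=":$port\$" \
        '$6 == "LISTEN" && $4 ~ p { split($NF, a, "/"); print a[1]; exit }')
if [ -n "$pid" ]; then
  echo "killing pid $pid on port $port"
  kill -9 "$pid" 2>/dev/null || echo "could not kill $pid (permissions?)"
else
  echo "no listener on port $port"
fi
```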

After killing the process ID that is using the port, restart your HS2; it should start successfully.

HTH

avatar
Explorer

@Geoffrey Shelton Okot


hiveserver2.log.txt

Thank you Geoffrey for the feedback. I have reviewed and run through the items below.


1. Revert your hive.server2.webui.port back to 10002 from 10202.
I have reverted back to the original port 10002. A few default ports were shared between the Accumulo and HiveServer2 components.
2. Can you try to connect to your hive database in my example my hive password is hive
I can connect to the hive database inside the default MySQL instance. The output is listed below:


mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| druid              |
| hive               |
| mysql              |
| performance_schema |
| ranger             |
| rangerkms          |
| registry           |
| streamline         |
| superset           |
| sys                |
+--------------------+
11 rows in set (0.00 sec)

3. There seems to be a problem with hiveserver creating a znode in zookeeper.
This is the original issue with this new Hadoop environment. I am able to connect using zookeeper-client with the following output:

[zk: localhost:2181(CONNECTED) 0] ls /

[cluster, brokers, storm, zookeeper, infra-solr, hbase-unsecure, tracers, admin, isr_change_notification, log_dir_event_notification, logsearch, accumulo, controller_epoch, hiveserver2-leader, druid, rmstore, atsv2-hbase-unsecure, consumers, ambari-metrics-cluster, latest_producer_id_block, config]

4. To access your HDP host, choose updating the hostname to the public DNS/IP.
This new single-node Hadoop environment is on AWS EC2 instance using Linux Ubuntu. So I have been using the private DNS as the hostname.

5. <<Caused by: java.net.BindException: Address already in use (Bind failed)>>

The offending port is causing the failure of HS2 to register with ZooKeeper!

The default HS2 port is 10000. Did you manually change any of these ports?

<<Caused by: org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:2181.>>

I updated port 10002, but didn't touch 10000. I have removed the Accumulo processes that were using ports 10001 and 10002.

The port 2181 is being used by Zookeeper.


Based on the items above, I restarted the HiveServer2 component, but it still failed; see the attached hiveserver2.log. Again, this new single-node Hadoop installation is on a single AWS EC2 instance running Ubuntu Linux.

Please let me know what the next step is. Thank you.

avatar
Master Mentor

@Andy Sutan

When your EC2 instance reboots, it gets a new IP address. In fact, it even gets a new hostname, as the private IP address is baked into it. Change the hostname on your Linux box to match the output of

$ hostname -f 

e.g myhost.com

Use the public IP

# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
[public IP]   myhost.com myhost

Private IP addresses are not reachable over the Internet and can be used for communication between the instances in your VPC or data center. Public IP addresses are reachable over the Internet and can be used for communication between your instances and the Internet, or with other AWS services that have public endpoints.