Hive LLAP installation fails with unable to connect to Metastore Error

Explorer

We have a two-node HDP 3.1.0 cluster on which we are trying to install LLAP for Hive.

Once we enable the service and save the config, Ambari performs a series of operations and tries to start the service. It always fails with the error below.

2020-01-21 05:26:49,730 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2020-01-21 05:26:49,771 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2020-01-21 05:26:50,121 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2020-01-21 05:26:50,132 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2020-01-21 05:26:50,134 - Group['livy'] {}
2020-01-21 05:26:50,136 - Group['spark'] {}
2020-01-21 05:26:50,136 - Group['hdfs'] {}
2020-01-21 05:26:50,137 - Group['hadoop'] {}
2020-01-21 05:26:50,137 - Group['users'] {}
2020-01-21 05:26:50,138 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-01-21 05:26:50,140 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-01-21 05:26:50,141 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-01-21 05:26:50,143 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-01-21 05:26:50,144 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-01-21 05:26:50,146 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-01-21 05:26:50,147 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2020-01-21 05:26:50,149 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2020-01-21 05:26:50,150 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-01-21 05:26:50,151 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2020-01-21 05:26:50,153 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-01-21 05:26:50,154 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-01-21 05:26:50,155 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-01-21 05:26:50,157 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-01-21 05:26:50,159 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2020-01-21 05:26:50,170 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2020-01-21 05:26:50,170 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2020-01-21 05:26:50,171 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-01-21 05:26:50,173 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-01-21 05:26:50,175 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2020-01-21 05:26:50,191 - call returned (0, '1014')
2020-01-21 05:26:50,192 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1014'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2020-01-21 05:26:50,201 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1014'] due to not_if
2020-01-21 05:26:50,202 - Group['hdfs'] {}
2020-01-21 05:26:50,203 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2020-01-21 05:26:50,204 - FS Type: HDFS
2020-01-21 05:26:50,204 - Directory['/etc/hadoop'] {'mode': 0755}
2020-01-21 05:26:50,234 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2020-01-21 05:26:50,235 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2020-01-21 05:26:50,258 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2020-01-21 05:26:50,273 - Skipping Execute[('setenforce', '0')] due to not_if
2020-01-21 05:26:50,274 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2020-01-21 05:26:50,278 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2020-01-21 05:26:50,280 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2020-01-21 05:26:50,281 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2020-01-21 05:26:50,292 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2020-01-21 05:26:50,297 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2020-01-21 05:26:50,312 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2020-01-21 05:26:50,341 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2020-01-21 05:26:50,343 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2020-01-21 05:26:50,345 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2020-01-21 05:26:50,355 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2020-01-21 05:26:50,363 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2020-01-21 05:26:50,371 - Skipping unlimited key JCE policy check and setup since it is not required
2020-01-21 05:26:50,852 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2020-01-21 05:26:50,872 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}
2020-01-21 05:26:50,927 - call returned (0, 'hive-server2 - 3.1.0.0-78')
2020-01-21 05:26:50,928 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2020-01-21 05:26:50,971 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://xxxxxxx:8080/resources/CredentialUtil.jar'), 'mode': 0755}
2020-01-21 05:26:50,973 - Not downloading the file from http://xxxxxxx:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists
2020-01-21 05:26:52,112 - HdfsResource['/warehouse/tablespace/managed/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0700}
2020-01-21 05:26:52,118 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://xxxxxxx:50070/webhdfs/v1/warehouse/tablespace/managed/hive?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpmcKrkX 2>/tmp/tmpnLy6xH''] {'logoutput': None, 'quiet': False}
2020-01-21 05:26:52,218 - call returned (0, '')
2020-01-21 05:26:52,218 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":3,"fileId":17306,"group":"hadoop","length":0,"modificationTime":1579244249550,"owner":"hive","pathSuffix":"","permission":"700","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2020-01-21 05:26:52,219 - Skipping the operation for not managed DFS directory /warehouse/tablespace/managed/hive since immutable_paths contains it.
2020-01-21 05:26:52,220 - HdfsResource['/user/hive/.yarn'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01755}
2020-01-21 05:26:52,222 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://xxxxxxx:50070/webhdfs/v1/user/hive/.yarn?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpW_iNr3 2>/tmp/tmpe9fOcS''] {'logoutput': None, 'quiet': False}
2020-01-21 05:26:52,326 - call returned (0, '')
2020-01-21 05:26:52,327 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":19712,"group":"hadoop","length":0,"modificationTime":1579246975600,"owner":"hive","pathSuffix":"","permission":"1755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2020-01-21 05:26:52,329 - HdfsResource['/user/hive/.yarn/package'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01755}
2020-01-21 05:26:52,332 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://xxxxxxx:50070/webhdfs/v1/user/hive/.yarn/package?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpjLjf6x 2>/tmp/tmp6RuMRL''] {'logoutput': None, 'quiet': False}
2020-01-21 05:26:52,439 - call returned (0, '')
2020-01-21 05:26:52,440 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":19713,"group":"hadoop","length":0,"modificationTime":1579245109665,"owner":"hive","pathSuffix":"","permission":"1755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2020-01-21 05:26:52,443 - HdfsResource['/user/hive/.yarn/package/LLAP'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01755}
2020-01-21 05:26:52,446 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://xxxxxxx:50070/webhdfs/v1/user/hive/.yarn/package/LLAP?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp_fluCi 2>/tmp/tmpFiePZi''] {'logoutput': None, 'quiet': False}
2020-01-21 05:26:52,557 - call returned (0, '')
2020-01-21 05:26:52,557 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":19714,"group":"hadoop","length":0,"modificationTime":1579246974334,"owner":"hive","pathSuffix":"","permission":"1755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2020-01-21 05:26:52,559 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01755}
2020-01-21 05:26:52,561 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://xxxxxxx:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpopJJNZ 2>/tmp/tmpACESdW''] {'logoutput': None, 'quiet': False}
2020-01-21 05:26:52,676 - call returned (0, '')
2020-01-21 05:26:52,677 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":46,"fileId":17337,"group":"hadoop","length":0,"modificationTime":1579243742943,"owner":"hive","pathSuffix":"","permission":"1755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2020-01-21 05:26:52,680 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/query_data/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2020-01-21 05:26:52,684 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://xxxxxxx:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/query_data/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpZxmmE5 2>/tmp/tmp2JqAVF''] {'logoutput': None, 'quiet': False}
2020-01-21 05:26:52,790 - call returned (0, '')
2020-01-21 05:26:52,791 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":4,"fileId":17338,"group":"hadoop","length":0,"modificationTime":1579478699945,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2020-01-21 05:26:52,794 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/dag_meta'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2020-01-21 05:26:52,797 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://xxxxxxx:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/dag_meta?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp6g57kg 2>/tmp/tmp54VvWm''] {'logoutput': None, 'quiet': False}
2020-01-21 05:26:52,907 - call returned (0, '')
2020-01-21 05:26:52,907 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":1,"fileId":17409,"group":"hadoop","length":0,"modificationTime":1579244531944,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2020-01-21 05:26:52,909 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/dag_data'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2020-01-21 05:26:52,912 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://xxxxxxx:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/dag_data?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmps5qrAi 2>/tmp/tmp1zG33W''] {'logoutput': None, 'quiet': False}
2020-01-21 05:26:53,028 - call returned (0, '')
2020-01-21 05:26:53,028 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":1,"fileId":17410,"group":"hadoop","length":0,"modificationTime":1579244393664,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2020-01-21 05:26:53,030 - HdfsResource['/warehouse/tablespace/external/hive/sys.db/app_data'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2020-01-21 05:26:53,033 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://xxxxxxx:50070/webhdfs/v1/warehouse/tablespace/external/hive/sys.db/app_data?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpCshlFa 2>/tmp/tmpvBnUXb''] {'logoutput': None, 'quiet': False}
2020-01-21 05:26:53,135 - call returned (0, '')
2020-01-21 05:26:53,135 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":3,"fileId":17411,"group":"hadoop","length":0,"modificationTime":1579583781985,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2020-01-21 05:26:53,138 - HdfsResource['/user/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0755}
2020-01-21 05:26:53,140 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://xxxxxxx:50070/webhdfs/v1/user/hive?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpUSnyUa 2>/tmp/tmptXLbZJ''] {'logoutput': None, 'quiet': False}
2020-01-21 05:26:53,245 - call returned (0, '')
2020-01-21 05:26:53,246 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":17300,"group":"hdfs","length":0,"modificationTime":1579245109005,"owner":"hive","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2020-01-21 05:26:53,249 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://xxxxxxx:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']}
2020-01-21 05:26:53,250 - Directories to fill with configs: [u'/usr/hdp/current/hive-server2/conf', u'/usr/hdp/current/hive-server2/conf_llap/']
2020-01-21 05:26:53,252 - Directory['/etc/hive/3.1.0.0-78/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755}
2020-01-21 05:26:53,254 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2020-01-21 05:26:53,290 - Generating config: /etc/hive/3.1.0.0-78/0/mapred-site.xml
2020-01-21 05:26:53,291 - File['/etc/hive/3.1.0.0-78/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2020-01-21 05:26:53,424 - File['/etc/hive/3.1.0.0-78/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-01-21 05:26:53,425 - File['/etc/hive/3.1.0.0-78/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755}
2020-01-21 05:26:53,432 - File['/etc/hive/3.1.0.0-78/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-01-21 05:26:53,438 - File['/etc/hive/3.1.0.0-78/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-01-21 05:26:53,445 - File['/etc/hive/3.1.0.0-78/0/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-01-21 05:26:53,449 - File['/etc/hive/3.1.0.0-78/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-01-21 05:26:53,453 - File['/etc/hive/3.1.0.0-78/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-01-21 05:26:53,455 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://xxxxxxx:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.llap': u'jdbc:hive2://xxxxxxx:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive', 'beeline.hs2.jdbc.url.default': u'container'}}
2020-01-21 05:26:53,474 - Generating config: /etc/hive/3.1.0.0-78/0/beeline-site.xml
2020-01-21 05:26:53,475 - File['/etc/hive/3.1.0.0-78/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2020-01-21 05:26:53,481 - File['/etc/hive/3.1.0.0-78/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-01-21 05:26:53,483 - Directory['/etc/hive_llap/conf'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0700}
2020-01-21 05:26:53,484 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive_llap/conf', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2020-01-21 05:26:53,500 - Generating config: /etc/hive_llap/conf/mapred-site.xml
2020-01-21 05:26:53,501 - File['/etc/hive_llap/conf/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2020-01-21 05:26:53,578 - File['/etc/hive_llap/conf/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:53,579 - File['/etc/hive_llap/conf/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755}
2020-01-21 05:26:53,585 - File['/etc/hive_llap/conf/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:53,589 - File['/etc/hive_llap/conf/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:53,594 - File['/etc/hive_llap/conf/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:53,597 - File['/etc/hive_llap/conf/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:53,601 - File['/etc/hive_llap/conf/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:53,603 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0600, 'conf_dir': '/etc/hive_llap/conf', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://xxxxxxx:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.llap': u'jdbc:hive2://xxxxxxx:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive', 'beeline.hs2.jdbc.url.default': u'container'}}
2020-01-21 05:26:53,619 - Generating config: /etc/hive_llap/conf/beeline-site.xml
2020-01-21 05:26:53,620 - File['/etc/hive_llap/conf/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2020-01-21 05:26:53,626 - File['/etc/hive_llap/conf/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:53,626 - Converted 'hive.llap.io.memory.size' value from '20480 MB' to '21474836480 Bytes' before writing it to config file.
2020-01-21 05:26:53,627 - Skipping setup for Atlas Hook, as it is disabled/ not supported.
2020-01-21 05:26:53,627 - No change done to Hive2/hive-site.xml 'hive.exec.post.hooks' value.
2020-01-21 05:26:53,627 - Retrieved 'tez/tez-site' for merging with 'tez_hive2/tez-interactive-site'.
2020-01-21 05:26:53,627 - XmlConfig['tez-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/tez_llap/conf', 'mode': 0664, 'configuration_attributes': {u'final': {u'tez.runtime.shuffle.ssl.enable': u'true'}}, 'owner': 'tez', 'configurations': ...}
2020-01-21 05:26:53,643 - Generating config: /etc/tez_llap/conf/tez-site.xml
2020-01-21 05:26:53,644 - File['/etc/tez_llap/conf/tez-site.xml'] {'owner': 'tez', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0664, 'encoding': 'UTF-8'}
2020-01-21 05:26:53,753 - Retrieved 'hiveserver2-site' for merging with 'hiveserver2-interactive-site'.
2020-01-21 05:26:53,753 - File['/usr/hdp/current/hive-server2/conf_llap/hive-site.jceks'] {'content': StaticFile('/var/lib/ambari-agent/cred/conf/hive_server_interactive/hive-site.jceks'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0640}
2020-01-21 05:26:53,754 - Writing File['/usr/hdp/current/hive-server2/conf_llap/hive-site.jceks'] because contents don't match
2020-01-21 05:26:53,756 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-server2/conf_llap/', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2020-01-21 05:26:53,771 - Generating config: /usr/hdp/current/hive-server2/conf_llap/hive-site.xml
2020-01-21 05:26:53,771 - File['/usr/hdp/current/hive-server2/conf_llap/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2020-01-21 05:26:54,136 - XmlConfig['hiveserver2-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-server2/conf_llap/', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2020-01-21 05:26:54,152 - Generating config: /usr/hdp/current/hive-server2/conf_llap/hiveserver2-site.xml
2020-01-21 05:26:54,152 - File['/usr/hdp/current/hive-server2/conf_llap/hiveserver2-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2020-01-21 05:26:54,171 - File['/usr/hdp/current/hive-server2/conf_llap//hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0755}
2020-01-21 05:26:54,177 - File['/usr/hdp/current/hive-server2/conf_llap//llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:54,181 - File['/usr/hdp/current/hive-server2/conf_llap//llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:54,186 - File['/usr/hdp/current/hive-server2/conf_llap//hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:54,190 - File['/usr/hdp/current/hive-server2/conf_llap//hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:54,193 - File['/usr/hdp/current/hive-server2/conf_llap//beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:54,194 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0600, 'conf_dir': '/usr/hdp/current/hive-server2/conf_llap/', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://xxxxxxx:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.llap': u'jdbc:hive2://xxxxxxx:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive', 'beeline.hs2.jdbc.url.default': u'container'}}
2020-01-21 05:26:54,209 - Generating config: /usr/hdp/current/hive-server2/conf_llap/beeline-site.xml
2020-01-21 05:26:54,210 - File['/usr/hdp/current/hive-server2/conf_llap/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2020-01-21 05:26:54,225 - File['/usr/hdp/current/hive-server2/conf_llap/hadoop-metrics2-hiveserver2.properties'] {'content': Template('hadoop-metrics2-hiveserver2.properties.j2'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:54,234 - File['/usr/hdp/current/hive-server2/conf_llap//hadoop-metrics2-llapdaemon.properties'] {'content': Template('hadoop-metrics2-llapdaemon.j2'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:54,243 - File['/usr/hdp/current/hive-server2/conf_llap//hadoop-metrics2-llaptaskscheduler.properties'] {'content': Template('hadoop-metrics2-llaptaskscheduler.j2'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-01-21 05:26:54,245 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2020-01-21 05:26:54,248 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2020-01-21 05:26:54,249 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://xxxxxxx:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2020-01-21 05:26:54,249 - Not downloading the file from http://xxxxxxx:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2020-01-21 05:26:54,252 - File['/var/lib/ambari-agent/tmp/start_hiveserver2_interactive_script'] {'content': Template('startHiveserver2Interactive.sh.j2'), 'mode': 0755}
2020-01-21 05:26:54,253 - Directory['/var/run/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2020-01-21 05:26:54,254 - Directory['/var/log/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2020-01-21 05:26:54,255 - Directory['/var/lib/hive2'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2020-01-21 05:26:54,260 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2020-01-21 05:26:54,261 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json
2020-01-21 05:26:54,261 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json'] {'content': Template('input.config-hive.json.j2'), 'mode': 0644}
2020-01-21 05:26:54,266 - Determining previous run 'LLAP package' folder(s) to be deleted ....
2020-01-21 05:26:54,266 - Previous run 'LLAP package' folder(s) to be deleted = []
2020-01-21 05:26:54,266 - No 'llap-yarn-service*' folder deleted.
2020-01-21 05:26:54,266 - Starting LLAP
2020-01-21 05:26:54,267 - Setting slider_placement : 0, as llap_daemon_container_size : 53248 > 0.5 * YARN NodeManager Memory(58368)
2020-01-21 05:26:54,269 - LLAP start command: /usr/hdp/current/hive-server2/bin/hive --service llap --size 53248m --startImmediately --name llap0 --cache 20480m --xmx 26214m --loglevel INFO --output /var/lib/ambari-agent/tmp/llap-yarn-service_2020-01-21_05-26-54 --user hive --service-placement 0 --skiphadoopversion --skiphbasecp --instances 1 --logger query-routing --args " -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:InitiatingHeapOccupancyPercent=70 -XX:+UnlockExperimentalVMOptions -XX:G1MaxNewSizePercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 -XX:MetaspaceSize=1024m"
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does not exist
WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not exist
WARN cli.LlapServiceDriver: Ignoring unknown llap server parameter: [hive.aux.jars.path]
WARN cli.LlapServiceDriver: Java versions might not match : JAVA_HOME=[/usr/jdk64/jdk1.8.0_112],process jre=[/usr/jdk64/jdk1.8.0_112/jre]
WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does not exist
WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not exist
WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does not exist
WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not exist
WARN metastore.HiveMetaStoreClient: Failed to connect to the MetaStore Server...
[... the same WARN repeats 23 more times as the client retries ...]
Failed: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:578)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:119)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:86)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4656)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4724)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4704)
	at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4995)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.downloadPermanentFunctions(LlapServiceDriver.java:716)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.access$400(LlapServiceDriver.java:87)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver$4.call(LlapServiceDriver.java:498)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver$4.call(LlapServiceDriver.java:490)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
	... 17 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.NoRouteToHostException: No route to host (Host unreachable)
	at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:544)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:225)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:96)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4656)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4724)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4704)
	at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4995)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.downloadPermanentFunctions(LlapServiceDriver.java:716)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.access$400(LlapServiceDriver.java:87)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver$4.call(LlapServiceDriver.java:498)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver$4.call(LlapServiceDriver.java:490)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.thrift.transport.TSocket.open(TSocket.java:221)
	... 25 more
)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:597)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:225)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:96)
	... 22 more

Command failed after 1 tries

Has anyone successfully installed this service on an HDP 3.1.0 cluster? If so, it would be a great help if you could explain the process.

We followed the Cloudera documentation, and the installation always fails with the above error.

Regular Hive is working fine, and I am able to run apps without any issues.

3 REPLIES

Re: Hive LLAP installation fails with unable to connect to Metastore Error

Explorer

@VidyaSargur Can anyone help here?

Re: Hive LLAP installation fails with unable to connect to Metastore Error

Master Collaborator

When LLAP connects to the Metastore, the connection is made from within the YARN/Tez container, so your hive database user must have permission to access the Metastore database from every host. To avoid this issue, I always create my hive user with a wildcard host, as follows:


-- '%' lets the hive user connect from any host
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Then, when required, I add a specific host (e.g., hdp.cloudera.com) as follows:


CREATE USER 'hive'@'hdp.cloudera.com' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hdp.cloudera.com' WITH GRANT OPTION;
FLUSH PRIVILEGES;
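
To verify which host entries exist for the hive user and what they are allowed to do, here is a quick check (assuming a MySQL/MariaDB Metastore database):

SELECT User, Host FROM mysql.user WHERE User = 'hive';
SHOW GRANTS FOR 'hive'@'%';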

Additionally, your error contains the message "No route to host". Ensure that your cluster uses FQDNs (fully qualified domain names) with appropriate /etc/hosts entries, that the Hive configs reference real hostnames (i.e., NOT "localhost"), and that every cluster host can connect to the Hive Metastore database using the user you created above:

mysql -u [user] -p -h [hostname]
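
Before testing the database login, it is also worth confirming name resolution and reachability of the Metastore endpoints from every node. A rough sketch (the hostname is a placeholder; 9083 is the default hive.metastore.uris port, and 3306 is the default MySQL port):

hostname -f                          # should print this node's FQDN, not localhost
getent hosts metastore.example.com   # should resolve via DNS or /etc/hosts
nc -vz metastore.example.com 9083    # Hive Metastore Thrift port
nc -vz metastore.example.com 3306    # Metastore database port (MySQL default)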

If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post.

Thanks,
Steven

Re: Hive LLAP installation fails with unable to connect to Metastore Error

Explorer

@stevenmatison Thanks for your reply and suggestions.

It turns out that the issue was the order of execution.

Nowhere does the official documentation say that you should not save the Hive configs immediately after making the LLAP changes.

If you save the Hive configuration without first adjusting the other required properties (the YARN queue, the YARN memory settings, and so on), Ambari immediately tries to start the HiveServer2 Interactive service, and the start fails because of the missing configuration.

What I did instead was make all the other required config changes in services like HDFS and YARN and restart those services successfully first; only then did I save the LLAP config changes and restart the required Hive services.

Then it went through successfully.
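
For reference, if you want to script that ordering instead of clicking through the Ambari UI, something like the following against Ambari's standard v1 REST API should work. This is only a rough sketch: the Ambari host, cluster name, and credentials are placeholders, and a real script would poll each returned request ID before moving on.

#!/bin/bash
AMBARI=http://ambari-host.example.com:8080   # placeholder Ambari server
CLUSTER=mycluster                            # placeholder cluster name
AUTH='admin:admin'                           # placeholder credentials

# Ambari treats state=INSTALLED as "stopped", so a stop/start pair is a restart.
restart_service() {
  for STATE in INSTALLED STARTED; do
    curl -u "$AUTH" -H 'X-Requested-By: ambari' -X PUT \
      -d "{\"RequestInfo\":{\"context\":\"Restart $1\"},\"Body\":{\"ServiceInfo\":{\"state\":\"$STATE\"}}}" \
      "$AMBARI/api/v1/clusters/$CLUSTER/services/$1"
  done
}

# Restart the prerequisite services first, then Hive last.
restart_service HDFS
restart_service YARN
restart_service HIVE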
