<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: HDP 3.1 hiveserver2 not starting in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288664#M213782</link>
    <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/73427"&gt;@dewi&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As we repeatedly see this &lt;FONT color="#FF6600"&gt;&lt;STRONG&gt;WARNING&lt;/STRONG&gt;&lt;/FONT&gt;:&amp;nbsp;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;2020-01-30T09:25:38,383 WARN  [main]: metastore.RetryingMetaStoreClient (:()) - MetaStoreClient lost connection. Attempting to reconnect (10 of 24) after 5s. getCurrentNotificationEventId
org.apache.thrift.TApplicationException: Internal error processing get_current_notificationEventId&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Hence, can you please try the following to see if it works for you?&lt;/P&gt;&lt;P&gt;Log in to the Ambari UI --&amp;gt; Hive --&amp;gt; Configs (tab) --&amp;gt; Custom hive-site.xml, click the "Add Property" button, and then add the following property:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;hive.metastore.event.db.notification.api.auth=false&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Then restart HiveServer2.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, in your log we see the following &lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;ERROR&lt;/STRONG&gt;&lt;/FONT&gt;:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;2020-01-30T09:25:38,225 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics242742500630655243json to /tmp/report.json&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So can you please check what the permissions and ownership are on the mentioned file? It should be owned by "hive:hadoop" and writable by that user.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# ls -lart /tmp/report.json
-rw-r--r--. 1 hive hadoop 3300 Jan 30 13:16 /tmp/report.json&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Thu, 30 Jan 2020 13:18:34 GMT</pubDate>
    <dc:creator>jsensharma</dc:creator>
    <dc:date>2020-01-30T13:18:34Z</dc:date>
    <item>
      <title>HDP 3.1 hiveserver2 not starting</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288619#M213757</link>
      <description>&lt;P&gt;Hi there, I've been facing this problem for two days now, and maybe you guys can find it.&lt;/P&gt;&lt;P&gt;I'm not the only one facing this issue, but none of the given solutions fixed my problem.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The Hive Metastore is running, but HiveServer2 is not; it says connection refused. I will put my exact error messages here too.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;        Connection failed on host localhost:10000 (Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 204, in execute
    ldap_password=ldap_password)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 84, in check_thrift_port_sasl
    timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of 'beeline -n hive -u 'jdbc:hive2://localhost:10000/;transportMode=binary'  -e ';' 2&amp;gt;&amp;amp;1 | awk '{print}' | grep -i -e 'Connected to:' -e 'Transaction isolation:'' returned 1. 
)
      &lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;and it gives this when I am trying to start hiveserver2:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;2020-01-30 10:56:14,981 - Execution of 'cat /var/run/hive/hive-server.pid 1&amp;gt;/tmp/tmp8y1m5V 2&amp;gt;/tmp/tmpRyZFzp' returned 1. cat: /var/run/hive/hive-server.pid: No such file or directory

2020-01-30 10:56:14,982 - get_user_call_output returned (1, u'', u'cat: /var/run/hive/hive-server.pid: No such file or directory')
2020-01-30 10:56:14,982 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'hive --config /usr/hdp/current/hive-server2/conf/ --service metatool -listFSRoot' 2&amp;gt;/dev/null | grep hdfs:// | cut -f1,2,3 -d '/' | grep -v 'hdfs://localhost:8020' | head -1'] {}
2020-01-30 10:56:26,621 - call returned (0, '')
2020-01-30 10:56:26,621 - Execute['/var/lib/ambari-agent/tmp/start_hiveserver2_script /var/log/hive/hive-server2.out /var/log/hive/hive-server2.err /var/run/hive/hive-server.pid /usr/hdp/current/hive-server2/conf/ /etc/tez/conf'] {'environment': {'HIVE_BIN': 'hive', 'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112', 'HADOOP_HOME': u'/usr/hdp/current/hadoop-client'}, 'not_if': 'ls /var/run/hive/hive-server.pid &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;amp; ps -p  &amp;gt;/dev/null 2&amp;gt;&amp;amp;1', 'user': 'hive', 'path': [u'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/var/lib/ambari-agent:/usr/hdp/current/hive-server2/bin:/usr/hdp/3.1.4.0-315/hadoop/bin']}
2020-01-30 10:56:26,652 - Execute['/usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/hive-server2/lib/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://novalinq-stoeptegel/hive?createDatabaseIfNotExist=true' hive [PROTECTED] com.mysql.jdbc.Driver'] {'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 5, 'try_sleep': 10}
2020-01-30 10:56:27,434 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 ls /hiveserver2test | grep 'serverUri=''] {}
2020-01-30 10:56:28,121 - call returned (1, 'Node does not exist: /hiveserver2test')
2020-01-30 10:56:28,121 - Will retry 29 time(s), caught exception: ZooKeeper node /hiveserver2test is not ready yet. Sleeping for 10 sec(s)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've also tried it with the ZooKeeper node /hiveserver2 (without 'test'), and that did not work either.&lt;/P&gt;&lt;P&gt;In the hiveserver2.log file I cannot find anything in particular; can you? (How do I attach a file?)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have tried the following (without success):&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Re-installed the Hive service&lt;/LI&gt;&lt;LI&gt;Checked hive-site.xml (everything seems right)&lt;/LI&gt;&lt;LI&gt;Checked whether the ports that Hive uses are free (they are, as far as I know)&lt;/LI&gt;&lt;LI&gt;Checked in zkCli.sh whether I could find hiveserver2 (I cannot find it; it is not there)&lt;/LI&gt;&lt;LI&gt;Tried DEBUG mode for Hive; did not get any other messages (a bit weird)&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;I am running an HDP 3.1.4.0 single-node cluster (not yet for production; first getting everything started).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you in advance!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Dewi&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/20288"&gt;@Shelton&lt;/a&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2020 10:17:24 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288619#M213757</guid>
      <dc:creator>dewi</dc:creator>
      <dc:date>2020-01-30T10:17:24Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 hiveserver2 not starting</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288636#M213762</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/73427"&gt;@dewi&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The problem does not seem to be related to the znode. Once HiveServer2 has started successfully, you should see the znode.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The issue seems to be the use of "localhost", as we see in the following output:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Connection failed on host localhost:10000
.
.
beeline -n hive -u 'jdbc:hive2://localhost:10000/;transportMode=binary'&lt;/LI-CODE&gt;&lt;P&gt;&lt;BR /&gt;Ideally those connection strings should show the &lt;EM&gt;&lt;STRONG&gt;FQDN (Fully Qualified Domain Name)&lt;/STRONG&gt;&lt;/EM&gt; instead of "&lt;STRONG&gt;localhost&lt;/STRONG&gt;".&lt;/P&gt;&lt;P&gt;So please check whether the HiveServer2 config has the "localhost" address hardcoded anywhere, or whether there is anything wrong with the hostname mapping in "/etc/hosts", etc.:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# grep 'localhost' /etc/hive/conf/*.xml&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Manual Startup Testing&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Also&lt;/STRONG&gt;, please verify whether you are able to start HiveServer2 from the command line:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# su - hive
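# (run the following as the hive user to start HiveServer2 manually
#  and capture its output under /tmp for inspection)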
# nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=" " &amp;gt; /tmp/hiveserver2HD.out 2&amp;gt;/tmp/hiveserver2HD.log &amp;amp;&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Then check whether port 10000 is listening on &lt;STRONG&gt;0.0.0.0&lt;/STRONG&gt; or on &lt;STRONG&gt;localhost&lt;/STRONG&gt;:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# netstat -tnlpa | grep `cat /var/run/hive/hive-server.pid`&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2020 11:34:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288636#M213762</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2020-01-30T11:34:54Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 hiveserver2 not starting</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288638#M213764</link>
      <description>&lt;P&gt;Hi, sorry, I manually changed the FQDN to 'localhost' in the post.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've tried starting HiveServer2 manually, but I get nothing in response.&lt;/P&gt;&lt;P&gt;Also, when I check port 10000, the terminal returns nothing.&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2020 11:41:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288638#M213764</guid>
      <dc:creator>dewi</dc:creator>
      <dc:date>2020-01-30T11:41:18Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 hiveserver2 not starting</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288640#M213766</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/73427"&gt;@dewi&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;What do you mean by &lt;EM&gt;"I've tried starting hiveserver manually"&lt;/EM&gt;?&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Can you please share the details of how you did this? And what have you used to set up Hive?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Also, can you please share the HiveServer2 logs from its manual command-line startup? Did you notice any error in the hiveserver2 log when you attempted to start it via the command line? If yes, can you please share the Hive conf and the full log? (You can mask the passwords/hostnames.)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Also, have you made any special config changes? For example, have you disabled the "hive.server2.support.dynamic.service.discovery" setting in your "Advanced hive-site"?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;It will be good to see the HS2 log and the Hive configs.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2020 11:55:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288640#M213766</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2020-01-30T11:55:53Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 hiveserver2 not starting</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288649#M213772</link>
      <description>&lt;P&gt;I have not changed any default properties for Hive or hive-site:&lt;/P&gt;&lt;P&gt;/usr/hdp/current/hive-server2/conf/hive-site.xml&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I will send the log in another reply.&lt;/P&gt;&lt;P&gt;(I don't know how to make an attachment in messages.)&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2020 12:18:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288649#M213772</guid>
      <dc:creator>dewi</dc:creator>
      <dc:date>2020-01-30T12:18:50Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 hiveserver2 not starting</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288650#M213773</link>
      <description>&lt;LI-CODE lang="markup"&gt;  &amp;lt;configuration xmlns:xi="http://www.w3.org/2001/XInclude"&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;ambari.hive.db.schema.name&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;hive&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;atlas.hook.hive.maxThreads&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;atlas.hook.hive.minThreads&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;credentialStoreClassPath&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/var/lib/ambari-agent/cred/lib/*&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;datanucleus.autoCreateSchema&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;datanucleus.cache.level2.type&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;none&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;datanucleus.fixedDatastore&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hadoop.security.credential.provider.path&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;jceks://file/usr/hdp/current/hive-server2/conf/hive-site.jceks&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.auto.convert.join&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.auto.convert.join.noconditionaltask&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.auto.convert.join.noconditionaltask.size&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1431655765&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.auto.convert.sortmerge.join&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.auto.convert.sortmerge.join.to.mapjoin&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.cbo.enable&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.cli.print.header&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.cluster.delegation.token.store.class&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hadoop.hive.thrift.ZooKeeperTokenStore&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.cluster.delegation.token.store.zookeeper.connectString&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;novalinq-stoeptegel:2181&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.cluster.delegation.token.store.zookeeper.znode&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/hive/cluster/delegation&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.compactor.abortedtxn.threshold&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.compactor.check.interval&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;300&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.compactor.delta.num.threshold&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;10&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.compactor.delta.pct.threshold&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;0.1f&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.compactor.initiator.on&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.compactor.worker.threads&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.compactor.worker.timeout&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;86400&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.compute.query.using.stats&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.convert.join.bucket.mapjoin.tez&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.create.as.insert.only&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.default.fileformat&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;TextFile&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.default.fileformat.managed&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;ORC&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.driver.parallel.compilation&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.enforce.sortmergebucketmapjoin&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.compress.intermediate&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.compress.output&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.dynamic.partition&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.dynamic.partition.mode&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;nonstrict&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.failure.hooks&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.max.created.files&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;100000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.max.dynamic.partitions&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;5000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.max.dynamic.partitions.pernode&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;2000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.orc.split.strategy&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;HYBRID&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.parallel&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.parallel.thread.number&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;8&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.post.hooks&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.pre.hooks&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.reducers.bytes.per.reducer&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;67108864&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.reducers.max&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1009&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.scratchdir&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/tmp/hive&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.submit.local.task.via.child&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.exec.submitviachild&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.execution.engine&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;tez&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.execution.mode&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;container&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.fetch.task.aggr&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.fetch.task.conversion&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;more&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.fetch.task.conversion.threshold&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1073741824&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.heapsize&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1024&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.hook.proto.base-directory&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/warehouse/tablespace/external/hive/sys.db/query_data/&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.limit.optimize.enable&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.limit.pushdown.memory.usage&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;0.04&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.load.data.owner&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;hive&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.lock.manager&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.map.aggr&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.map.aggr.hash.force.flush.memory.threshold&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;0.9&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.map.aggr.hash.min.reduction&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;0.5&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.map.aggr.hash.percentmemory&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;0.5&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.mapjoin.bucket.cache.size&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;10000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.mapjoin.hybridgrace.hashtable&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.mapjoin.optimized.hashtable&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.mapred.reduce.tasks.speculative.execution&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.materializedview.rewriting.incremental&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.merge.mapfiles&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.merge.mapredfiles&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.merge.orcfile.stripe.level&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.merge.rcfile.block.level&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.merge.size.per.task&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;256000000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.merge.smallfiles.avgsize&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;16000000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.merge.tezfiles&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.authorization.storage.checks&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.cache.pinobjtypes&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;Table,Database,Type,FieldSchema,Order&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.client.connect.retry.delay&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;5s&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.client.socket.timeout&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1800s&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.connect.retries&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;24&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.db.type&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;MYSQL&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.dml.events&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.event.listeners&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.execute.setugi&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.failure.retries&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;24&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.kerberos.keytab.file&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/etc/security/keytabs/hive.service.keytab&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.kerberos.principal&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;hive/_HOST@EXAMPLE.COM&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.pre.event.listeners&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.sasl.enabled&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.server.max.threads&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;100000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.transactional.event.listeners&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hive.hcatalog.listener.DbNotificationListener&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.uris&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;thrift://novalinq-stoeptegel:9083&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.warehouse.dir&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/warehouse/tablespace/managed/hive&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.warehouse.external.dir&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/warehouse/tablespace/external/hive&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.optimize.bucketmapjoin&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.optimize.bucketmapjoin.sortedmerge&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.optimize.constant.propagation&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.optimize.dynamic.partition.hashjoin&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.optimize.index.filter&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.optimize.metadataonly&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.optimize.null.scan&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.optimize.reducededuplication&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.optimize.reducededuplication.min.reducer&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;4&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.optimize.sort.dynamic.partition&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.orc.compute.splits.num.threads&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;10&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.orc.splits.include.file.footer&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.prewarm.enabled&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.prewarm.numcontainers&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;3&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.repl.cm.enabled&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.repl.cmrootdir&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.repl.rootdir&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.security.metastore.authenticator.manager&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.security.metastore.authorization.auth.reads&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.security.metastore.authorization.manager&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.allow.user.substitution&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.authentication&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;NONE&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.authentication.spnego.keytab&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;HTTP/_HOST@EXAMPLE.COM&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.authentication.spnego.principal&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/etc/security/keytabs/spnego.service.keytab&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.enable.doAs&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.idle.operation.timeout&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;6h&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.idle.session.timeout&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1d&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.logging.operation.enabled&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.logging.operation.log.location&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/tmp/hive/operation_logs&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.max.start.attempts&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;5&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.support.dynamic.service.discovery&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.table.type.mapping&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;CLASSIC&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.tez.default.queues&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;default&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.tez.initialize.default.sessions&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.tez.sessions.per.default.queue&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.thrift.http.path&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;cliservice&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.thrift.http.port&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;10001&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.thrift.max.worker.threads&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;500&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.thrift.port&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;10000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.thrift.sasl.qop&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;auth&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.transport.mode&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;binary&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.use.SSL&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.webui.cors.allowed.headers&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;X-Requested-With,Content-Type,Accept,Origin,X-Requested-By,x-requested-by&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.webui.enable.cors&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.webui.port&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;10002&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.webui.use.ssl&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.server2.zookeeper.namespace&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;hiveserver2&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.service.metrics.codahale.reporter.classes&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hadoop.hive.common.metrics.metrics2.JsonFileMetricsReporter,org.apache.hadoop.hive.common.metrics.metrics2.JmxMetricsReporter,org.apache.hadoop.hive.common.metrics.metrics2.Metrics2Reporter&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.smbjoin.cache.rows&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;10000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.stats.autogather&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.stats.dbclass&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;fs&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.stats.fetch.column.stats&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.stats.fetch.partition.stats&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.strict.managed.tables&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.support.concurrency&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.auto.reducer.parallelism&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.bucket.pruning&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.cartesian-product.enabled&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.container.size&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;5120&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.cpu.vcores&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;-1&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.dynamic.partition.pruning&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.dynamic.partition.pruning.max.data.size&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;104857600&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.dynamic.partition.pruning.max.event.size&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1048576&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.exec.print.summary&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.input.format&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hadoop.hive.ql.io.HiveInputFormat&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.input.generate.consistent.splits&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.java.opts&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;-server -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.log.level&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;INFO&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.max.partition.factor&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;2.0&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.min.partition.factor&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;0.25&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.tez.smb.number.waves&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;0.5&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.txn.manager&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;org.apache.hadoop.hive.ql.lockmgr.DbTxnManager&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.txn.max.open.batch&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;1000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.txn.strict.locking.mode&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;false&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.txn.timeout&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;300&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.user.install.directory&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;/user/&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.vectorized.execution.enabled&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.vectorized.execution.mapjoin.minmax.enabled&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.vectorized.execution.mapjoin.native.enabled&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.vectorized.execution.reduce.enabled&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.vectorized.groupby.checkinterval&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;4096&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.vectorized.groupby.flush.percent&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;0.1&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.vectorized.groupby.maxentries&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;100000&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.zookeeper.client.port&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;2181&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.zookeeper.namespace&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;hive_zookeeper_namespace&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.zookeeper.quorum&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;novalinq-stoeptegel:2181&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;javax.jdo.option.ConnectionDriverName&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;com.mysql.jdbc.Driver&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;javax.jdo.option.ConnectionURL&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;jdbc:mysql://novalinq-stoeptegel/hive&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;javax.jdo.option.ConnectionUserName&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;hive&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;metastore.create.as.acid&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    
  &amp;lt;/configuration&amp;gt;&lt;/LI-CODE&gt;</description>
      <pubDate>Thu, 30 Jan 2020 12:19:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288650#M213773</guid>
      <dc:creator>dewi</dc:creator>
      <dc:date>2020-01-30T12:19:30Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 hiveserver2 not starting</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288662#M213780</link>
      <description>&lt;LI-CODE lang="markup"&gt;2020-01-30T09:25:38,225 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics242742500630655243json to /tmp/report.json
2020-01-30T09:25:38,226 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Exception during rename
java.nio.file.FileSystemException: /tmp/hmetrics242742500630655243json -&amp;gt; /tmp/report.json: Operation not permitted
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[?:1.8.0_112]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_112]
	at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396) ~[?:1.8.0_112]
	at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262) ~[?:1.8.0_112]
	at java.nio.file.Files.move(Files.java:1395) ~[?:1.8.0_112]
	at org.apache.hadoop.hive.common.metrics.metrics2.JsonFileMetricsReporter.run(JsonFileMetricsReporter.java:175) ~[hive-common-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_112]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_112]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
2020-01-30T09:25:38,369 INFO  [main]: metastore.RetryingMetaStoreClient (:()) - RetryingMetaStoreClient trying reconnect as hive (auth:SIMPLE)
2020-01-30T09:25:38,369 INFO  [main]: metastore.HiveMetaStoreClient (:()) - Closed a connection to metastore, current connections: 0
2020-01-30T09:25:38,369 INFO  [main]: metastore.HiveMetaStoreClient (:()) - Trying to connect to metastore with URI thrift://novalinq-stoeptegel:9083
2020-01-30T09:25:38,370 INFO  [main]: metastore.HiveMetaStoreClient (:()) - Opened a connection to metastore, current connections: 1
2020-01-30T09:25:38,372 INFO  [main]: metastore.HiveMetaStoreClient (:()) - Connected to metastore.
2020-01-30T09:25:38,383 WARN  [main]: metastore.RetryingMetaStoreClient (:()) - MetaStoreClient lost connection. Attempting to reconnect (10 of 24) after 5s. getCurrentNotificationEventId
org.apache.thrift.TApplicationException: Internal error processing get_current_notificationEventId
	at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_current_notificationEventId(ThriftHiveMetastore.java:5848) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_current_notificationEventId(ThriftHiveMetastore.java:5836) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getCurrentNotificationEventId(HiveMetaStoreClient.java:2945) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at com.sun.proxy.$Proxy37.getCurrentNotificationEventId(Unknown Source) ~[?:?]
	at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:3001) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at com.sun.proxy.$Proxy37.getCurrentNotificationEventId(Unknown Source) ~[?:?]
	at org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getCurrentNotificationEventId(EventUtils.java:75) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.metadata.events.NotificationEventPoll.&amp;lt;init&amp;gt;(NotificationEventPoll.java:100) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.metadata.events.NotificationEventPoll.initialize(NotificationEventPoll.java:56) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:275) ~[hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1077) ~[hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.server.HiveServer2.access$1700(HiveServer2.java:136) ~[hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1346) ~[hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1190) ~[hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_112]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_112]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
	at org.apache.hadoop.util.RunJar.run(RunJar.java:318) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
	at org.apache.hadoop.util.RunJar.main(RunJar.java:232) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
2020-01-30T09:25:42,923 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics4047442475965513030json to /tmp/report.json
2020-01-30T09:25:42,923 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Exception during rename
java.nio.file.FileSystemException: /tmp/hmetrics4047442475965513030json -&amp;gt; /tmp/report.json: Operation not permitted
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[?:1.8.0_112]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_112]
	at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396) ~[?:1.8.0_112]
	at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262) ~[?:1.8.0_112]
	at java.nio.file.Files.move(Files.java:1395) ~[?:1.8.0_112]
	at org.apache.hadoop.hive.common.metrics.metrics2.JsonFileMetricsReporter.run(JsonFileMetricsReporter.java:175) ~[hive-common-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_112]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_112]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]&lt;/LI-CODE&gt;&lt;P&gt;In hiveserver2.log, this error seems to repeat over and over before HiveServer2 shuts down.&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2020 12:59:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288662#M213780</guid>
      <dc:creator>dewi</dc:creator>
      <dc:date>2020-01-30T12:59:51Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 hiveserver2 not starting</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288664#M213782</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/73427"&gt;@dewi&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As we repeatedly see this &lt;FONT color="#FF6600"&gt;&lt;STRONG&gt;WARNING&lt;/STRONG&gt;&lt;/FONT&gt;:&amp;nbsp;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;2020-01-30T09:25:38,383 WARN  [main]: metastore.RetryingMetaStoreClient (:()) - MetaStoreClient lost connection. Attempting to reconnect (10 of 24) after 5s. getCurrentNotificationEventId
org.apache.thrift.TApplicationException: Internal error processing get_current_notificationEventId&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Hence, could you please try the following to see if it works for you?&lt;/P&gt;&lt;P&gt;Log in to the Ambari UI --&amp;gt; Hive --&amp;gt; Configs (tab) --&amp;gt;&amp;nbsp;Custom hive-site.xml, click the "Add Property" button, and add the following property:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;hive.metastore.event.db.notification.api.auth=false&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Then restart HiveServer2.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also in your log we see the following &lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;ERROR&lt;/STRONG&gt;&lt;/FONT&gt;:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;2020-01-30T09:25:38,225 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics242742500630655243json to /tmp/report.json&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So can you please check the permissions and ownership on the mentioned file? It should be owned by "hive:hadoop" and writable by the "hive" user.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# ls -lart /tmp/report.json
-rw-r--r--. 1 hive hadoop 3300 Jan 30 13:16 /tmp/report.json&lt;/LI-CODE&gt;</description>
      <pubDate>Thu, 30 Jan 2020 13:18:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288664#M213782</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2020-01-30T13:18:34Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 hiveserver2 not starting</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288666#M213784</link>
      <description>&lt;P&gt;# ls -lart /tmp/report.json&lt;/P&gt;&lt;P&gt;Output:&lt;BR /&gt;-rw-r--r-- 1 hive hadoop 8861 jan 30 14:30 /tmp/report.json&lt;BR /&gt;This seems right.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I added this property to Custom hive-site in Ambari:&lt;/P&gt;&lt;P&gt;key=hive.metastore value=hive.metastore.event.db.notification.api.auth=false&lt;/P&gt;&lt;P&gt;hiveserver2.log still returns the same error when restarting.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2020 13:44:40 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/288666#M213784</guid>
      <dc:creator>dewi</dc:creator>
      <dc:date>2020-01-30T13:44:40Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 hiveserver2 not starting</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/300229#M220094</link>
      <description>&lt;P&gt;YES! This solved my problem; it seems to be a HiveServer2 bug mentioned in the Apache bug reports.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;hive.metastore.event.db.notification.api.auth=false&lt;/PRE&gt;</description>
      <pubDate>Wed, 22 Jul 2020 10:37:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-hiveserver2-not-starting/m-p/300229#M220094</guid>
      <dc:creator>dewi</dc:creator>
      <dc:date>2020-07-22T10:37:42Z</dc:date>
    </item>
  </channel>
</rss>