
HDP 3.1 hiveserver2 not starting

Contributor

Hi there, I've been facing this problem for two days now, and maybe you can help me figure it out.

I'm not the only one facing this issue, but none of the given solutions fixed my problem.

 

The Hive metastore is running, but HiveServer2 is not; it says "connection refused". I will put my exact error messages here too.

 

 

        Connection failed on host localhost:10000 (Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 204, in execute
    ldap_password=ldap_password)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 84, in check_thrift_port_sasl
    timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of 'beeline -n hive -u 'jdbc:hive2://localhost:10000/;transportMode=binary'  -e ';' 2>&1 | awk '{print}' | grep -i -e 'Connected to:' -e 'Transaction isolation:'' returned 1. 
)
      

 

and it gives this when I try to start HiveServer2:

 

2020-01-30 10:56:14,981 - Execution of 'cat /var/run/hive/hive-server.pid 1>/tmp/tmp8y1m5V 2>/tmp/tmpRyZFzp' returned 1. cat: /var/run/hive/hive-server.pid: No such file or directory

2020-01-30 10:56:14,982 - get_user_call_output returned (1, u'', u'cat: /var/run/hive/hive-server.pid: No such file or directory')
2020-01-30 10:56:14,982 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'hive --config /usr/hdp/current/hive-server2/conf/ --service metatool -listFSRoot' 2>/dev/null | grep hdfs:// | cut -f1,2,3 -d '/' | grep -v 'hdfs://localhost:8020' | head -1'] {}
2020-01-30 10:56:26,621 - call returned (0, '')
2020-01-30 10:56:26,621 - Execute['/var/lib/ambari-agent/tmp/start_hiveserver2_script /var/log/hive/hive-server2.out /var/log/hive/hive-server2.err /var/run/hive/hive-server.pid /usr/hdp/current/hive-server2/conf/ /etc/tez/conf'] {'environment': {'HIVE_BIN': 'hive', 'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112', 'HADOOP_HOME': u'/usr/hdp/current/hadoop-client'}, 'not_if': 'ls /var/run/hive/hive-server.pid >/dev/null 2>&1 && ps -p  >/dev/null 2>&1', 'user': 'hive', 'path': [u'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/var/lib/ambari-agent:/usr/hdp/current/hive-server2/bin:/usr/hdp/3.1.4.0-315/hadoop/bin']}
2020-01-30 10:56:26,652 - Execute['/usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/hive-server2/lib/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://novalinq-stoeptegel/hive?createDatabaseIfNotExist=true' hive [PROTECTED] com.mysql.jdbc.Driver'] {'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 5, 'try_sleep': 10}
2020-01-30 10:56:27,434 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 ls /hiveserver2test | grep 'serverUri=''] {}
2020-01-30 10:56:28,121 - call returned (1, 'Node does not exist: /hiveserver2test')
2020-01-30 10:56:28,121 - Will retry 29 time(s), caught exception: ZooKeeper node /hiveserver2test is not ready yet. Sleeping for 10 sec(s)

 

I've also tried it with the ZooKeeper node /hiveserver2 (without 'test'), and that did not work either.

In the hiveserver2.log file I cannot find anything in particular; can you? (How do I attach a file?)

 

I have tried the following (without success):

  1. Re-installed the Hive service
  2. Checked hive-site.xml (everything seems right)
  3. Checked whether the ports that Hive uses are free (they are, as far as I know; a quick check is sketched below)
  4. Checked in zkCli.sh whether I could find hiveserver2 (I cannot; it is not there)
  5. Tried DEBUG mode for Hive, but did not get any other messages (a bit weird).
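
For reference, a quick way to run the port check from step 3 (a sketch; 10000/10001/10002 are the default HiveServer2 binary, HTTP, and web UI ports):

# netstat -tnlp | grep -E ':10000|:10001|:10002'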

I am running an HDP 3.1.4.0 single-node cluster (not yet for production; I am first getting everything started).

 

Thank you in advance!

 

Dewi

 

@Shelton 

 

 

1 ACCEPTED SOLUTION

Master Mentor

@dewi 

Since we repeatedly see this WARNING:

 

2020-01-30T09:25:38,383 WARN  [main]: metastore.RetryingMetaStoreClient (:()) - MetaStoreClient lost connection. Attempting to reconnect (10 of 24) after 5s. getCurrentNotificationEventId
org.apache.thrift.TApplicationException: Internal error processing get_current_notificationEventId

 


Hence, can you please try the following to see if it works for you?

Log in to the Ambari UI --> Hive --> Configs (tab) --> Custom hive-site. Click the "Add Property" button and then add the following property:

 

hive.metastore.event.db.notification.api.auth=false

 

Then restart HiveServer2.
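
After the restart, you can optionally confirm that the property made it into the rendered config (a sanity-check sketch; the path assumes the standard HDP layout):

# grep -A1 'hive.metastore.event.db.notification.api.auth' /usr/hdp/current/hive-server2/conf/hive-site.xml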

 


Also, in your log we see the following ERROR:

 

2020-01-30T09:25:38,225 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics242742500630655243json to /tmp/report.json

 

 

So can you please check the permissions and ownership on the mentioned file? It should be owned and writable by the "hive" user and "hadoop" group.

Example:

 

# ls -lart /tmp/report.json
-rw-r--r--. 1 hive hadoop 3300 Jan 30 13:16 /tmp/report.json
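
If the ownership or permissions differ, a possible fix (a sketch, assuming the hive:hadoop owner/group as above) is to correct them, or simply delete the file so HiveServer2 recreates it on the next start:

# chown hive:hadoop /tmp/report.json
# chmod 644 /tmp/report.json

or:

# rm -f /tmp/report.json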

 


9 REPLIES

Master Mentor

@dewi 

The problem does not seem to be related to the znode. Once HiveServer2 has started successfully, you should see the znode.

 

The issue seems to be the use of "localhost", as we see in the following output:

Connection failed on host localhost:10000 
...
beeline -n hive -u 'jdbc:hive2://localhost:10000/;transportMode=binary'


Ideally those connection strings should show the FQDN (fully qualified domain name) instead of "localhost".

So please check whether the HiveServer2 config has the "localhost" address hardcoded anywhere, or whether anything is wrong with the hostname resolution in "/etc/hosts", etc.:

# grep 'localhost' /etc/hive/conf/*.xml
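
You can also sanity-check the hostname resolution itself (a sketch; for the FQDN, getent should return the host's real IP rather than 127.0.0.1):

# hostname -f
# getent hosts $(hostname -f)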


 

Manual Startup Testing

Also, please verify whether you are able to start HiveServer2 from the command line:

# su - hive
# nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=" " > /tmp/hiveserver2HD.out 2> /tmp/hiveserver2HD.log &

 


Then check whether port 10000 is listening on 0.0.0.0 or only on localhost:

# netstat -tnlpa | grep `cat /var/run/hive/hive-server.pid`
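
If netstat is not available, ss shows the same information (a sketch):

# ss -tnlp | grep ':10000'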


 

Contributor

Hi, sorry: I manually changed the FQDN to 'localhost' in the post.

 

I've tried starting HiveServer2 manually, but I get nothing in response.

Also, when I check port 10000, the terminal returns nothing.

Master Mentor

@dewi 
What do you mean by "I've tried setup hiveserver manually"?

Can you please share the details of how you did this, and what you used to set up Hive?

 

Also, can you please share the HiveServer2 logs from its manual command-line startup? Did you notice any error in the hiveserver2 log when you attempted to start it via the command line? If yes, can you please share the Hive conf and the full log? (You can mask the passwords/hostnames.)


Also, have you made any special config changes, for example disabling the "hive.server2.support.dynamic.service.discovery" setting in your "Advanced hive-site"?

It would be good to see the HS2 log and the Hive configs.

Contributor

I have not changed any default properties for Hive or hive-site:

/usr/hdp/current/hive-server2/conf/hive-site.xml

 

I will send the log in another reply.

(I don't know how to make an attachment in messages.)

Contributor
  <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
    
    <property>
      <name>ambari.hive.db.schema.name</name>
      <value>hive</value>
    </property>
    
    <property>
      <name>atlas.hook.hive.maxThreads</name>
      <value>1</value>
    </property>
    
    <property>
      <name>atlas.hook.hive.minThreads</name>
      <value>1</value>
    </property>
    
    <property>
      <name>credentialStoreClassPath</name>
      <value>/var/lib/ambari-agent/cred/lib/*</value>
    </property>
    
    <property>
      <name>datanucleus.autoCreateSchema</name>
      <value>false</value>
    </property>
    
    <property>
      <name>datanucleus.cache.level2.type</name>
      <value>none</value>
    </property>
    
    <property>
      <name>datanucleus.fixedDatastore</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hadoop.security.credential.provider.path</name>
      <value>jceks://file/usr/hdp/current/hive-server2/conf/hive-site.jceks</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join.noconditionaltask</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join.noconditionaltask.size</name>
      <value>1431655765</value>
    </property>
    
    <property>
      <name>hive.auto.convert.sortmerge.join</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.sortmerge.join.to.mapjoin</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.cbo.enable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.cli.print.header</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.class</name>
      <value>org.apache.hadoop.hive.thrift.ZooKeeperTokenStore</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.zookeeper.connectString</name>
      <value>novalinq-stoeptegel:2181</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.zookeeper.znode</name>
      <value>/hive/cluster/delegation</value>
    </property>
    
    <property>
      <name>hive.compactor.abortedtxn.threshold</name>
      <value>1000</value>
    </property>
    
    <property>
      <name>hive.compactor.check.interval</name>
      <value>300</value>
    </property>
    
    <property>
      <name>hive.compactor.delta.num.threshold</name>
      <value>10</value>
    </property>
    
    <property>
      <name>hive.compactor.delta.pct.threshold</name>
      <value>0.1f</value>
    </property>
    
    <property>
      <name>hive.compactor.initiator.on</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.compactor.worker.threads</name>
      <value>1</value>
    </property>
    
    <property>
      <name>hive.compactor.worker.timeout</name>
      <value>86400</value>
    </property>
    
    <property>
      <name>hive.compute.query.using.stats</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.convert.join.bucket.mapjoin.tez</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.create.as.insert.only</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.default.fileformat</name>
      <value>TextFile</value>
    </property>
    
    <property>
      <name>hive.default.fileformat.managed</name>
      <value>ORC</value>
    </property>
    
    <property>
      <name>hive.driver.parallel.compilation</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.enforce.sortmergebucketmapjoin</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.compress.intermediate</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.compress.output</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.dynamic.partition</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.dynamic.partition.mode</name>
      <value>nonstrict</value>
    </property>
    
    <property>
      <name>hive.exec.failure.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook</value>
    </property>
    
    <property>
      <name>hive.exec.max.created.files</name>
      <value>100000</value>
    </property>
    
    <property>
      <name>hive.exec.max.dynamic.partitions</name>
      <value>5000</value>
    </property>
    
    <property>
      <name>hive.exec.max.dynamic.partitions.pernode</name>
      <value>2000</value>
    </property>
    
    <property>
      <name>hive.exec.orc.split.strategy</name>
      <value>HYBRID</value>
    </property>
    
    <property>
      <name>hive.exec.parallel</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.parallel.thread.number</name>
      <value>8</value>
    </property>
    
    <property>
      <name>hive.exec.post.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook</value>
    </property>
    
    <property>
      <name>hive.exec.pre.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook</value>
    </property>
    
    <property>
      <name>hive.exec.reducers.bytes.per.reducer</name>
      <value>67108864</value>
    </property>
    
    <property>
      <name>hive.exec.reducers.max</name>
      <value>1009</value>
    </property>
    
    <property>
      <name>hive.exec.scratchdir</name>
      <value>/tmp/hive</value>
    </property>
    
    <property>
      <name>hive.exec.submit.local.task.via.child</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.submitviachild</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.execution.engine</name>
      <value>tez</value>
    </property>
    
    <property>
      <name>hive.execution.mode</name>
      <value>container</value>
    </property>
    
    <property>
      <name>hive.fetch.task.aggr</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.fetch.task.conversion</name>
      <value>more</value>
    </property>
    
    <property>
      <name>hive.fetch.task.conversion.threshold</name>
      <value>1073741824</value>
    </property>
    
    <property>
      <name>hive.heapsize</name>
      <value>1024</value>
    </property>
    
    <property>
      <name>hive.hook.proto.base-directory</name>
      <value>/warehouse/tablespace/external/hive/sys.db/query_data/</value>
    </property>
    
    <property>
      <name>hive.limit.optimize.enable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.limit.pushdown.memory.usage</name>
      <value>0.04</value>
    </property>
    
    <property>
      <name>hive.load.data.owner</name>
      <value>hive</value>
    </property>
    
    <property>
      <name>hive.lock.manager</name>
      <value></value>
    </property>
    
    <property>
      <name>hive.map.aggr</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.force.flush.memory.threshold</name>
      <value>0.9</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.min.reduction</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.percentmemory</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.mapjoin.bucket.cache.size</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>hive.mapjoin.hybridgrace.hashtable</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.mapjoin.optimized.hashtable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.mapred.reduce.tasks.speculative.execution</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.materializedview.rewriting.incremental</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.merge.mapfiles</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.mapredfiles</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.merge.orcfile.stripe.level</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.rcfile.block.level</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.size.per.task</name>
      <value>256000000</value>
    </property>
    
    <property>
      <name>hive.merge.smallfiles.avgsize</name>
      <value>16000000</value>
    </property>
    
    <property>
      <name>hive.merge.tezfiles</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.authorization.storage.checks</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.cache.pinobjtypes</name>
      <value>Table,Database,Type,FieldSchema,Order</value>
    </property>
    
    <property>
      <name>hive.metastore.client.connect.retry.delay</name>
      <value>5s</value>
    </property>
    
    <property>
      <name>hive.metastore.client.socket.timeout</name>
      <value>1800s</value>
    </property>
    
    <property>
      <name>hive.metastore.connect.retries</name>
      <value>24</value>
    </property>
    
    <property>
      <name>hive.metastore.db.type</name>
      <value>MYSQL</value>
    </property>
    
    <property>
      <name>hive.metastore.dml.events</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.metastore.event.listeners</name>
      <value></value>
    </property>
    
    <property>
      <name>hive.metastore.execute.setugi</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.metastore.failure.retries</name>
      <value>24</value>
    </property>
    
    <property>
      <name>hive.metastore.kerberos.keytab.file</name>
      <value>/etc/security/keytabs/hive.service.keytab</value>
    </property>
    
    <property>
      <name>hive.metastore.kerberos.principal</name>
      <value>hive/_HOST@EXAMPLE.COM</value>
    </property>
    
    <property>
      <name>hive.metastore.pre.event.listeners</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
    </property>
    
    <property>
      <name>hive.metastore.sasl.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.server.max.threads</name>
      <value>100000</value>
    </property>
    
    <property>
      <name>hive.metastore.transactional.event.listeners</name>
      <value>org.apache.hive.hcatalog.listener.DbNotificationListener</value>
    </property>
    
    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://novalinq-stoeptegel:9083</value>
    </property>
    
    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/warehouse/tablespace/managed/hive</value>
    </property>
    
    <property>
      <name>hive.metastore.warehouse.external.dir</name>
      <value>/warehouse/tablespace/external/hive</value>
    </property>
    
    <property>
      <name>hive.optimize.bucketmapjoin</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.bucketmapjoin.sortedmerge</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.optimize.constant.propagation</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.dynamic.partition.hashjoin</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.index.filter</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.metadataonly</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.null.scan</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.reducededuplication</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.reducededuplication.min.reducer</name>
      <value>4</value>
    </property>
    
    <property>
      <name>hive.optimize.sort.dynamic.partition</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.orc.compute.splits.num.threads</name>
      <value>10</value>
    </property>
    
    <property>
      <name>hive.orc.splits.include.file.footer</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.prewarm.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.prewarm.numcontainers</name>
      <value>3</value>
    </property>
    
    <property>
      <name>hive.repl.cm.enabled</name>
      <value></value>
    </property>
    
    <property>
      <name>hive.repl.cmrootdir</name>
      <value></value>
    </property>
    
    <property>
      <name>hive.repl.rootdir</name>
      <value></value>
    </property>
    
    <property>
      <name>hive.security.metastore.authenticator.manager</name>
      <value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authorization.auth.reads</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authorization.manager</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
    </property>
    
    <property>
      <name>hive.server2.allow.user.substitution</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.authentication</name>
      <value>NONE</value>
    </property>
    
    <property>
      <name>hive.server2.authentication.spnego.keytab</name>
      <value>HTTP/_HOST@EXAMPLE.COM</value>
    </property>
    
    <property>
      <name>hive.server2.authentication.spnego.principal</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>
    
    <property>
      <name>hive.server2.enable.doAs</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.idle.operation.timeout</name>
      <value>6h</value>
    </property>
    
    <property>
      <name>hive.server2.idle.session.timeout</name>
      <value>1d</value>
    </property>
    
    <property>
      <name>hive.server2.logging.operation.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.logging.operation.log.location</name>
      <value>/tmp/hive/operation_logs</value>
    </property>
    
    <property>
      <name>hive.server2.max.start.attempts</name>
      <value>5</value>
    </property>
    
    <property>
      <name>hive.server2.support.dynamic.service.discovery</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.table.type.mapping</name>
      <value>CLASSIC</value>
    </property>
    
    <property>
      <name>hive.server2.tez.default.queues</name>
      <value>default</value>
    </property>
    
    <property>
      <name>hive.server2.tez.initialize.default.sessions</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.server2.tez.sessions.per.default.queue</name>
      <value>1</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.http.path</name>
      <value>cliservice</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.http.port</name>
      <value>10001</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.max.worker.threads</name>
      <value>500</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.port</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.sasl.qop</name>
      <value>auth</value>
    </property>
    
    <property>
      <name>hive.server2.transport.mode</name>
      <value>binary</value>
    </property>
    
    <property>
      <name>hive.server2.use.SSL</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.server2.webui.cors.allowed.headers</name>
      <value>X-Requested-With,Content-Type,Accept,Origin,X-Requested-By,x-requested-by</value>
    </property>
    
    <property>
      <name>hive.server2.webui.enable.cors</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.webui.port</name>
      <value>10002</value>
    </property>
    
    <property>
      <name>hive.server2.webui.use.ssl</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.server2.zookeeper.namespace</name>
      <value>hiveserver2</value>
    </property>
    
    <property>
      <name>hive.service.metrics.codahale.reporter.classes</name>
      <value>org.apache.hadoop.hive.common.metrics.metrics2.JsonFileMetricsReporter,org.apache.hadoop.hive.common.metrics.metrics2.JmxMetricsReporter,org.apache.hadoop.hive.common.metrics.metrics2.Metrics2Reporter</value>
    </property>
    
    <property>
      <name>hive.smbjoin.cache.rows</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>hive.stats.autogather</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.stats.dbclass</name>
      <value>fs</value>
    </property>
    
    <property>
      <name>hive.stats.fetch.column.stats</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.stats.fetch.partition.stats</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.strict.managed.tables</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.support.concurrency</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.auto.reducer.parallelism</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.bucket.pruning</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.cartesian-product.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.container.size</name>
      <value>5120</value>
    </property>
    
    <property>
      <name>hive.tez.cpu.vcores</name>
      <value>-1</value>
    </property>
    
    <property>
      <name>hive.tez.dynamic.partition.pruning</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.dynamic.partition.pruning.max.data.size</name>
      <value>104857600</value>
    </property>
    
    <property>
      <name>hive.tez.dynamic.partition.pruning.max.event.size</name>
      <value>1048576</value>
    </property>
    
    <property>
      <name>hive.tez.exec.print.summary</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.input.format</name>
      <value>org.apache.hadoop.hive.ql.io.HiveInputFormat</value>
    </property>
    
    <property>
      <name>hive.tez.input.generate.consistent.splits</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.java.opts</name>
      <value>-server -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps</value>
    </property>
    
    <property>
      <name>hive.tez.log.level</name>
      <value>INFO</value>
    </property>
    
    <property>
      <name>hive.tez.max.partition.factor</name>
      <value>2.0</value>
    </property>
    
    <property>
      <name>hive.tez.min.partition.factor</name>
      <value>0.25</value>
    </property>
    
    <property>
      <name>hive.tez.smb.number.waves</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.txn.manager</name>
      <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
    </property>
    
    <property>
      <name>hive.txn.max.open.batch</name>
      <value>1000</value>
    </property>
    
    <property>
      <name>hive.txn.strict.locking.mode</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.txn.timeout</name>
      <value>300</value>
    </property>
    
    <property>
      <name>hive.user.install.directory</name>
      <value>/user/</value>
    </property>
    
    <property>
      <name>hive.vectorized.execution.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.vectorized.execution.mapjoin.minmax.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.vectorized.execution.mapjoin.native.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.vectorized.execution.reduce.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.vectorized.groupby.checkinterval</name>
      <value>4096</value>
    </property>
    
    <property>
      <name>hive.vectorized.groupby.flush.percent</name>
      <value>0.1</value>
    </property>
    
    <property>
      <name>hive.vectorized.groupby.maxentries</name>
      <value>100000</value>
    </property>
    
    <property>
      <name>hive.zookeeper.client.port</name>
      <value>2181</value>
    </property>
    
    <property>
      <name>hive.zookeeper.namespace</name>
      <value>hive_zookeeper_namespace</value>
    </property>
    
    <property>
      <name>hive.zookeeper.quorum</name>
      <value>novalinq-stoeptegel:2181</value>
    </property>
    
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
    </property>
    
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://novalinq-stoeptegel/hive</value>
    </property>
    
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hive</value>
    </property>
    
    <property>
      <name>metastore.create.as.acid</name>
      <value>true</value>
    </property>
    
  </configuration>

Contributor
2020-01-30T09:25:38,225 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics242742500630655243json to /tmp/report.json
2020-01-30T09:25:38,226 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Exception during rename
java.nio.file.FileSystemException: /tmp/hmetrics242742500630655243json -> /tmp/report.json: Operation not permitted
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[?:1.8.0_112]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_112]
	at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396) ~[?:1.8.0_112]
	at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262) ~[?:1.8.0_112]
	at java.nio.file.Files.move(Files.java:1395) ~[?:1.8.0_112]
	at org.apache.hadoop.hive.common.metrics.metrics2.JsonFileMetricsReporter.run(JsonFileMetricsReporter.java:175) ~[hive-common-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_112]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_112]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
2020-01-30T09:25:38,369 INFO  [main]: metastore.RetryingMetaStoreClient (:()) - RetryingMetaStoreClient trying reconnect as hive (auth:SIMPLE)
2020-01-30T09:25:38,369 INFO  [main]: metastore.HiveMetaStoreClient (:()) - Closed a connection to metastore, current connections: 0
2020-01-30T09:25:38,369 INFO  [main]: metastore.HiveMetaStoreClient (:()) - Trying to connect to metastore with URI thrift://novalinq-stoeptegel:9083
2020-01-30T09:25:38,370 INFO  [main]: metastore.HiveMetaStoreClient (:()) - Opened a connection to metastore, current connections: 1
2020-01-30T09:25:38,372 INFO  [main]: metastore.HiveMetaStoreClient (:()) - Connected to metastore.
2020-01-30T09:25:38,383 WARN  [main]: metastore.RetryingMetaStoreClient (:()) - MetaStoreClient lost connection. Attempting to reconnect (10 of 24) after 5s. getCurrentNotificationEventId
org.apache.thrift.TApplicationException: Internal error processing get_current_notificationEventId
	at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_current_notificationEventId(ThriftHiveMetastore.java:5848) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_current_notificationEventId(ThriftHiveMetastore.java:5836) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getCurrentNotificationEventId(HiveMetaStoreClient.java:2945) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at com.sun.proxy.$Proxy37.getCurrentNotificationEventId(Unknown Source) ~[?:?]
	at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:3001) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at com.sun.proxy.$Proxy37.getCurrentNotificationEventId(Unknown Source) ~[?:?]
	at org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getCurrentNotificationEventId(EventUtils.java:75) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.metadata.events.NotificationEventPoll.<init>(NotificationEventPoll.java:100) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.metadata.events.NotificationEventPoll.initialize(NotificationEventPoll.java:56) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:275) ~[hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1077) ~[hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.server.HiveServer2.access$1700(HiveServer2.java:136) ~[hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1346) ~[hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1190) ~[hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_112]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_112]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
	at org.apache.hadoop.util.RunJar.run(RunJar.java:318) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
	at org.apache.hadoop.util.RunJar.main(RunJar.java:232) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
2020-01-30T09:25:42,923 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics4047442475965513030json to /tmp/report.json
2020-01-30T09:25:42,923 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Exception during rename
java.nio.file.FileSystemException: /tmp/hmetrics4047442475965513030json -> /tmp/report.json: Operation not permitted
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[?:1.8.0_112]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_112]
	at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396) ~[?:1.8.0_112]
	at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262) ~[?:1.8.0_112]
	at java.nio.file.Files.move(Files.java:1395) ~[?:1.8.0_112]
	at org.apache.hadoop.hive.common.metrics.metrics2.JsonFileMetricsReporter.run(JsonFileMetricsReporter.java:175) ~[hive-common-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_112]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_112]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]

In hiveserver2.log it seems to be giving this error over and over again before HiveServer2 shuts down.

Contributor

# ls -lart /tmp/report.json

Output:
-rw-r--r-- 1 hive hadoop 8861 jan 30 14:30 /tmp/report.json
This seems right.

 

I added this property to Custom hive-site in Ambari:

key=hive.metastore value=hive.metastore.event.db.notification.api.auth=false

hiveserver2.log still returns the same error when restarting.

 

Contributor

YES! This solved my problem. It seems to be a bug in HiveServer2 that is mentioned in the Apache bug reports.

 

hive.metastore.event.db.notification.api.auth=false