HiveServer2 Is Going Down Frequently While Using Hive on Tez in CDP 7.1.7

New Contributor
[main]: Error starting HiveServer2 on attempt 3, will retry in 60000ms
org.apache.hive.service.ServiceException: org.apache.hive.service.ServiceException: Unable to setup tez session pool
	at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:721) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1059) [hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.access$1400(HiveServer2.java:138) [hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1333) [hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1177) [hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_232]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_232]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232]
	at org.apache.hadoop.util.RunJar.run(RunJar.java:318) [hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.util.RunJar.main(RunJar.java:232) [hadoop-common-3.1.1.7.1.7.0-551.jar:?]
Caused by: org.apache.hive.service.ServiceException: Unable to setup tez session pool
	at org.apache.hive.service.server.HiveServer2.initAndStartTezSessionPoolManager(HiveServer2.java:825) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.startOrReconnectTezSessions(HiveServer2.java:795) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:718) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	... 10 more
Caused by: org.apache.tez.dag.api.TezException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:2048, vCores:1>, maximum allowed allocation=<memory:1024, vCores:2>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:1024, vCores:2>
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:491)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:387)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:315)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:293)
	at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:580)
	at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:392)
	at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:330)
	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:664)
	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:290)
	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:611)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)

	at org.apache.tez.client.TezClient.start(TezClient.java:410) ~[tez-api-0.9.1.7.1.7.0-551.jar:0.9.1.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.startSessionAndContainers(TezSessionState.java:536) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:374) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:313) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolSession.open(TezSessionPoolSession.java:118) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPool.startInitialSession(TezSessionPool.java:359) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPool.startUnderInitLock(TezSessionPool.java:171) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPool.start(TezSessionPool.java:123) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.startPool(TezSessionPoolManager.java:115) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.initAndStartTezSessionPoolManager(HiveServer2.java:822) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.startOrReconnectTezSessions(HiveServer2.java:795) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:718) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	... 10 more
Caused by: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:2048, vCores:1>, maximum allowed allocation=<memory:1024, vCores:2>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:1024, vCores:2>
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:491)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:387)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:315)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:293)
	at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:580)
	at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:392)
	at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:330)
	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:664)
	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:290)
	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:611)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_232]
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_232]
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_232]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_232]
	at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53) ~[hadoop-yarn-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75) ~[hadoop-yarn-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116) ~[hadoop-yarn-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:304) ~[hadoop-yarn-common-3.1.1.7.1.7.0-551.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_232]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_232]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at com.sun.proxy.$Proxy43.submitApplication(Unknown Source) ~[?:?]
	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:328) ~[hadoop-yarn-client-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.tez.client.TezYarnClient.submitApplication(TezYarnClient.java:77) ~[tez-api-0.9.1.7.1.7.0-551.jar:0.9.1.7.1.7.0-551]
	at org.apache.tez.client.TezClient.start(TezClient.java:405) ~[tez-api-0.9.1.7.1.7.0-551.jar:0.9.1.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.startSessionAndContainers(TezSessionState.java:536) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:374) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:313) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolSession.open(TezSessionPoolSession.java:118) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPool.startInitialSession(TezSessionPool.java:359) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPool.startUnderInitLock(TezSessionPool.java:171) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPool.start(TezSessionPool.java:123) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.startPool(TezSessionPoolManager.java:115) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.initAndStartTezSessionPoolManager(HiveServer2.java:822) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.startOrReconnectTezSessions(HiveServer2.java:795) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:718) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	... 10 more
Caused by: org.apache.hadoop.ipc.RemoteException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:2048, vCores:1>, maximum allowed allocation=<memory:1024, vCores:2>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:1024, vCores:2>
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:491)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:387)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:315)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:293)
	at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:580)
	at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:392)
	at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:330)
	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:664)
	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:290)
	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:611)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1508) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1405) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at com.sun.proxy.$Proxy42.submitApplication(Unknown Source) ~[?:?]
	at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:301) ~[hadoop-yarn-common-3.1.1.7.1.7.0-551.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_232]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_232]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
	at com.sun.proxy.$Proxy43.submitApplication(Unknown Source) ~[?:?]
	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:328) ~[hadoop-yarn-client-3.1.1.7.1.7.0-551.jar:?]
	at org.apache.tez.client.TezYarnClient.submitApplication(TezYarnClient.java:77) ~[tez-api-0.9.1.7.1.7.0-551.jar:0.9.1.7.1.7.0-551]
	at org.apache.tez.client.TezClient.start(TezClient.java:405) ~[tez-api-0.9.1.7.1.7.0-551.jar:0.9.1.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.startSessionAndContainers(TezSessionState.java:536) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:374) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:313) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolSession.open(TezSessionPoolSession.java:118) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPool.startInitialSession(TezSessionPool.java:359) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPool.startUnderInitLock(TezSessionPool.java:171) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPool.start(TezSessionPool.java:123) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.startPool(TezSessionPoolManager.java:115) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.initAndStartTezSessionPoolManager(HiveServer2.java:822) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.startOrReconnectTezSessions(HiveServer2.java:795) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:718) ~[hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	... 10 more

[main-EventThread]: Error stopping schq
java.lang.IllegalStateException: The current ScheduledQueryExecutionService INSTANCE is invalid
	at org.apache.hadoop.hive.ql.scheduled.ScheduledQueryExecutionService.close(ScheduledQueryExecutionService.java:312) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2.stop(HiveServer2.java:892) [hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.hive.service.server.HiveServer2$DeRegisterWatcher.process(HiveServer2.java:617) [hive-service-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
	at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:77) [curator-framework-4.3.0.7.1.7.0-551.jar:4.3.0.7.1.7.0-551]
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) [zookeeper-3.5.5.7.1.7.0-551.jar:3.5.5.7.1.7.0-551]
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) [zookeeper-3.5.5.7.1.7.0-551.jar:3.5.5.7.1.7.0-551]
2 Replies

Community Manager

Welcome to the community @Drp7. While you wait for a more knowledgeable member to chime in, I thought I would point out something that may already be obvious to you. I see the following in the output you provided:

org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:2048, vCores:1>, maximum allowed allocation=<memory:1024, vCores:2>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:1024, vCores:2>

In other words, HiveServer2's Tez session pool is asking YARN for a 2048 MB container, but the largest allocation the scheduler can grant, based on the registered NodeManagers, is only 1024 MB. YARN rejects the request, the session pool fails to start, and HiveServer2 shuts down and retries.

Cy Jervis, Manager, Community Program

Super Collaborator

The error message indicates a resource-allocation problem in YARN, Hadoop's resource manager: the Tez session is requesting a 2048 MB container, but the maximum allowed allocation, which the scheduler derives from the registered NodeManagers, is only 1024 MB. Here are some steps you can take to address the issue:

  1. Review YARN Configuration:

    • Check the YARN settings that govern resource allocation, in particular yarn.scheduler.maximum-allocation-mb and yarn.scheduler.maximum-allocation-vcores.
    • Ensure the configured values cover the resources HiveServer2 and its Tez sessions actually request.
  2. Increase Maximum Allocation:

    • If those ceilings are too low, raise yarn.scheduler.maximum-allocation-mb and yarn.scheduler.maximum-allocation-vcores in the YARN scheduler configuration (see the sketch after this list).
  3. Check NodeManager Resources:

    • Verify the resources each NodeManager registers with, e.g. yarn.nodemanager.resource.memory-mb. The maximum allowed allocation is capped by the largest registered NodeManager, so raising the scheduler limits alone will not help if no node actually offers that much memory.
  4. Monitor Resource Usage:

    • Monitor resource usage in the cluster with the ResourceManager UI or the YARN command-line tools (yarn top, yarn node -list -all, etc.; examples follow the list).
    • Look for patterns of resource exhaustion or contention that could be contributing to the failures.
  5. Review Hive Configuration:

    • Review the Hive settings that determine request sizes, such as hive.tez.container.size. Since this failure happens while HiveServer2 is starting its Tez session pool, the 2048 MB request is most likely the Tez ApplicationMaster, which is sized by tez.am.resource.memory.mb (a sketch follows the list). Ensure these values fit within your cluster's limits.
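
If you decide to raise the ceilings (steps 1-3), the properties involved look roughly like the sketch below. In CDP you would normally change them through Cloudera Manager (YARN service > Configuration) rather than editing yarn-site.xml by hand, and the 4096/4 values are purely illustrative; size them to what your NodeManagers can actually offer.

<!-- yarn-site.xml (illustrative values, not recommendations) -->
<property>
  <!-- must be at least the 2048 MB the Tez session requests -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>4</value>
</property>
<property>
  <!-- memory each NodeManager registers with YARN; the scheduler caps
       allocations at the largest registered node, so this must also
       cover the request -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>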

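Conversely, if your nodes genuinely have only 1024 MB to offer, the alternative is to shrink the request side (step 5). A minimal sketch using the standard Hive/Tez property names, which in CDP you would likewise set through Cloudera Manager for the Tez and Hive on Tez services:

<!-- tez-site.xml / hive-site.xml (illustrative values) -->
<property>
  <!-- Tez ApplicationMaster container size; most likely the 2048 MB
       request that fails at session-pool startup -->
  <name>tez.am.resource.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <!-- Tez task container size used by Hive -->
  <name>hive.tez.container.size</name>
  <value>1024</value>
</property>
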
After making any configuration changes, restart the affected services (YARN, HiveServer2) for the changes to take effect.
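
Once the services are back up, you can confirm what the cluster actually registers and keep an eye on usage (step 4). These are standard YARN CLI commands; <node-id> is a placeholder for one of your NodeManager IDs:

# Registered NodeManagers with the memory/vCores each one offers
yarn node -list -all

# Detailed resource report for a single node
yarn node -status <node-id>

# Live view of running applications and cluster resource usage
yarn top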