Created on 01-07-2017 03:55 PM - edited 09-16-2022 03:53 AM
TPC-DS queries ran when using Hive, but failed to run with Hive 2. HDP version 2.5.3.
With beeline under hive:

[root@xxx sample-queries-tpcds]# /usr/hdp/2.5.3.0-38/hive/bin/beeline -u jdbc:hive2:///tpcds_bin_partitioned_orc_10000 -f query12.sql 2>&1 | tee query12.out
Connecting to jdbc:hive2:///tpcds_bin_partitioned_orc_10000
Connected to: Apache Hive (version 1.2.1000.2.5.3.0-38)
Driver: Hive JDBC (version 1.2.1000.2.5.3.0-38)
...
100 rows selected (32.14 seconds)

With beeline under hive2:

[root@xxxx sample-queries-tpcds]# /usr/hdp/2.5.3.0-38/hive2/bin/beeline -u jdbc:hive2:///tpcds_bin_partitioned_orc_10000 -f query12.sql 2>&1 | tee query12_hive2.out
Connecting to jdbc:hive2:///tpcds_bin_partitioned_orc_10000
...
Connected to: Apache Hive (version 2.1.0.2.5.3.0-38)
Driver: Hive JDBC (version 2.1.0.2.5.3.0-38)
...
Status: Failed
17/01/06 13:43:04 [HiveServer2-Background-Pool: Thread-33]: ERROR SessionState: Status: Failed
Vertex failed, vertexName=Map 5, vertexId=vertex_1483672287571_0146_1_00, diagnostics=[Vertex vertex_1483672287571_0146_1_00 [Map 5] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
    at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:70)
    at org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:151)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:148)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:148)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:121)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:3986)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.access$3100(VertexImpl.java:204)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:2818)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2765)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2747)
    at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
    at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
    at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
    at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
    at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:59)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1888)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:203)
    at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2242)
    at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2228)
    at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
    at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:114)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68)
    ... 25 more
Caused by: java.lang.IllegalStateException: org.apache.hadoop.hive.ql.exec.tez.HostAffinitySplitLocationProviderneeds at least 1 location to function
    at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
    at org.apache.hadoop.hive.ql.exec.tez.HostAffinitySplitLocationProvider.<init>(HostAffinitySplitLocationProvider.java:51)
    at org.apache.hadoop.hive.ql.exec.tez.Utils.getSplitLocationProvider(Utils.java:52)
    at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.<init>(HiveSplitGenerator.java:121)
    ... 30 more
Created 01-09-2017 08:17 PM
This error typically indicates that the LLAP daemons are not running (the error message itself does need to be improved).
What needs to be looked at here is why the LLAP daemons are not up. If they are in fact running, we can look at next steps.
More detail on the error: the client generates splits based on the number of LLAP instances that are up and running. If there are no live instances, it cannot generate splits and fails with an error indicating that 0 locations are available (the "needs at least 1 location to function" check thrown from Preconditions.checkState in HostAffinitySplitLocationProvider).
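If it helps, a quick way to confirm from the command line whether any LLAP daemons are registered is the llapstatus service and the YARN application list. This is only a sketch using the HDP 2.5.3 paths from the original post; exact options may vary by install:

/usr/hdp/2.5.3.0-38/hive2/bin/hive --service llapstatus
yarn application -list -appTypes org-apache-slider

The llapstatus output should report state RUNNING_ALL and a liveInstances count greater than 0 before LLAP queries can be expected to work.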
Created 01-09-2017 11:59 PM
Verified in Ambari that HiveServer2 Interactive was started, and verified that the LLAP daemon is running.
Same problem as before.
Output of ps, showing the Slider agent and the LlapDaemon process:
yarn 47364 47362 0 18:23 ? 00:00:00 /bin/bash -c python ./infra/agent/slider-agent/agent/main.py --label container_e20_1484001352838_0037_01_000011___LLAP --zk-quorum p264n11.pbm.ihost.com:2181,p264n01.pbm.ihost.com:2181,p264n02.pbm.ihost.com:2181 --zk-reg-path /registry/users/hive/services/org-apache-slider/llap0 > /hdd9/hadoop/yarn/log/application_1484001352838_0037/container_e20_1484001352838_0037_01_000011/slider-agent.out 2>&1
yarn 47375 47364 0 18:23 ? 00:00:00 python ./infra/agent/slider-agent/agent/main.py --label container_e20_1484001352838_0037_01_000011___LLAP --zk-quorum p264n11.pbm.ihost.com:2181,p264n01.pbm.ihost.com:2181,p264n02.pbm.ihost.com:2181 --zk-reg-path /registry/users/hive/services/org-apache-slider/llap0
yarn 47416 1 7 18:23 ? 00:00:22 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b15.el7_2.ppc64le/bin/java -Dproc_llapdaemon -Xms164864m -Xmx164864m -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:MetaspaceSize=1024m -XX:InitiatingHeapOccupancyPercent=80 -XX:MaxGCPauseMillis=200 -server -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+PrintGCDetails -verbose:gc -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=4 -XX:GCLogFileSize=100M -Xloggc:/hdd9/hadoop/yarn/log/application_1484001352838_0037/container_e20_1484001352838_0037_01_000011//gc.log -Djava.io.tmpdir=/hdd9/hadoop/yarn/local/usercache/hive/appcache/application_1484001352838_0037/container_e20_1484001352838_0037_01_000011/tmp/ -Dlog4j.configurationFile=llap-daemon-log4j2.properties -Dllap.daemon.log.dir=/hdd9/hadoop/yarn/log/application_1484001352838_0037/container_e20_1484001352838_0037_01_000011/ -Dllap.daemon.log.file=llap-daemon-hive-p264n01.pbm.ihost.com.log -Dllap.daemon.root.logger=RFA -Dllap.daemon.log.level=INFO -classpath /hdd9/hadoop/yarn/local/usercache/hive/appcache/application_1484001352838_0037/container_e20_1484001352838_0037_01_000011/app/install//conf/:/hdd9/hadoop/yarn/local/usercache/hive/appcache/application_1484001352838_0037/container_e20_1484001352838_0037_01_000011/app/install//lib/*:/hdd9/hadoop/yarn/local/usercache/hive/appcache/application_1484001352838_0037/container_e20_1484001352838_0037_01_000011/app/install//lib/tez/*:/hdd9/hadoop/yarn/local/usercache/hive/appcache/application_1484001352838_0037/container_e20_1484001352838_0037_01_000011/app/install//lib/udfs/*:.: org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon
----------------------------------------------------------------------------------------------
/usr/hdp/2.5.3.0-38/hive2/bin/beeline -u jdbc:hive2://
which: no hbase in (/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/home/operf/oprofile_install/bin:/root/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.3.0-38/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.3.0-38/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://
17/01/09 18:30:37 [main]: WARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist
17/01/09 18:30:37 [main]: WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
17/01/09 18:30:38 [main]: WARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist
17/01/09 18:30:38 [main]: WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
17/01/09 18:30:42 [main]: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
17/01/09 18:30:42 [main]: WARN metrics2.CodahaleMetrics: A Gauge with name [init_total_count_dbs] already exists. The old gauge will be overwritten, but this is not recommended
17/01/09 18:30:42 [main]: WARN metrics2.CodahaleMetrics: A Gauge with name [init_total_count_tables] already exists. The old gauge will be overwritten, but this is not recommended
17/01/09 18:30:42 [main]: WARN metrics2.CodahaleMetrics: A Gauge with name [init_total_count_partitions] already exists. The old gauge will be overwritten, but this is not recommended
Connected to: Apache Hive (version 2.1.0.2.5.3.0-38)
Driver: Hive JDBC (version 2.1.0.2.5.3.0-38)
17/01/09 18:30:42 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
use tpcds_bin_partitioned_orc_10000;
OK
No rows affected (0.059 seconds)
0: jdbc:hive2://> source query12.sql;
17/01/09 18:33:12 [main]: WARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist
17/01/09 18:33:12 [main]: WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
17/01/09 18:33:13 [69d264fd-a751-41c5-8b90-ca0dfd20b436 main]: ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
17/01/09 18:33:15 [69d264fd-a751-41c5-8b90-ca0dfd20b436 main]: ERROR calcite.RelOptHiveTable: No Stats for tpcds_bin_partitioned_orc_10000@item, Columns: i_item_sk
17/01/09 18:33:15 [69d264fd-a751-41c5-8b90-ca0dfd20b436 main]: WARN parse.CalcitePlanner: Missing column stats (see previous messages), skipping join reordering in CBO
Query ID = root_20170109183313_f1b39e2e-66fe-4b19-b7b2-a419b901956a
Total jobs = 1
Launching Job 1 out of 1
Status: Failed
17/01/09 18:33:24 [HiveServer2-Background-Pool: Thread-51]: ERROR SessionState: Status: Failed
Vertex failed, vertexName=Map 5, vertexId=vertex_1484001352838_0070_1_00, diagnostics=[Vertex vertex_1484001352838_0070_1_00 [Map 5] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
    at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:70)
    at org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:151)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:148)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:148)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:121)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:3986)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.access$3100(VertexImpl.java:204)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:2818)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2765)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2747)
    at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
    at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
    at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
    at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
    at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:59)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1888)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:203)
    at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2242)
    at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2228)
    at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
    at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:114)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68)
    ... 25 more
Caused by: java.lang.IllegalStateException: org.apache.hadoop.hive.ql.exec.tez.HostAffinitySplitLocationProviderneeds at least 1 location to function
    at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
    at org.apache.hadoop.hive.ql.exec.tez.HostAffinitySplitLocationProvider.<init>(HostAffinitySplitLocationProvider.java:51)
    at org.apache.hadoop.hive.ql.exec.tez.Utils.getSplitLocationProvider(Utils.java:52)
    at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.<init>(HiveSplitGenerator.java:121)
    ... 30 more
Created 01-10-2017 01:22 PM
Hi @sseth
Thanks for the information.
I am facing the same issue as well. While debugging further, LLAP appears to be running, and I have some more logs for you.
Below is the registry JSON from LlapRegistryService.getClient(), followed by the llapstatus log output.
Service LlapRegistryService in state LlapRegistryService: STARTED
{
  "amInfo" : {
    "appName" : "llap0",
    "appType" : "org-apache-slider",
    "appId" : "application_1483597273108_0102",
    "containerId" : "container_e10_1483597273108_0102_01_000001",
    "hostname" : "pts00452-vm5.persistent.com",
    "amWebUrl" : "http://pts00452-vm5.persistent.com:36813/"
  },
  "state" : "RUNNING_ALL",
  "originalConfigurationPath" : "hdfs://pts00452-vm5.persistent.com:8020/user/hive/.slider/cluster/llap0/snapshot",
  "generatedConfigurationPath" : "hdfs://pts00452-vm5.persistent.com:8020/user/hive/.slider/cluster/llap0/generated",
  "desiredInstances" : 1,
  "liveInstances" : 1,
  "appStartTime" : 1484052291830,
  "llapInstances" : [ {
    "hostname" : "pts00452-vm5.persistent.com",
    "containerId" : "container_e10_1483597273108_0102_01_000002",
    "statusUrl" : "http://pts00452-vm5.persistent.com:15002/status",
    "webUrl" : "http://pts00452-vm5.persistent.com:15002",
    "rpcPort" : 15001,
    "mgmtPort" : 15004,
    "shufflePort" : 15551
  } ]
}

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.3.0-38/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.3.0-38/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
INFO cli.LlapStatusServiceDriver: LLAP status invoked with arguments = --hiveconf
INFO conf.HiveConf: Found configuration file file:/etc/hive2/2.5.3.0-38/0/conf.server/hive-site.xml
WARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
INFO impl.TimelineClientImpl: Timeline service address: http://pts00452-vm5.persistent.com:8188/ws/v1/timeline/
INFO client.RMProxy: Connecting to ResourceManager at pts00452-vm5.persistent.com/10.88.67.162:8050
INFO client.AHSProxy: Connecting to Application History server at pts00452-vm5.persistent.com/10.88.67.162:10200
WARN curator.CuratorZookeeperClient: session timeout [10000] is less than connection timeout [15000]
INFO impl.LlapZookeeperRegistryImpl: Llap Zookeeper Registry is enabled with registryid: llap0
INFO impl.LlapRegistryService: Using LLAP registry type org.apache.hadoop.hive.llap.registry.impl.LlapZookeeperRegistryImpl@3e48d38
INFO impl.LlapZookeeperRegistryImpl: UGI security is not enabled, or non-daemon environment. Skipping setting up ZK auth.
INFO imps.CuratorFrameworkImpl: Starting
INFO impl.LlapRegistryService: Using LLAP registry (client) type: Service LlapRegistryService in state LlapRegistryService: STARTED
INFO state.ConnectionStateManager: State change: CONNECTED
INFO cli.LlapStatusServiceDriver: LLAP status finished
Created 01-10-2017 03:40 PM
For beeline, can you run "set hive.llap.client.consistent.splits=false;" and then run your query, as in the example below?
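For example, in the same beeline session (a sketch reusing the query file from the original post; with consistent splits off, split generation should no longer depend on live LLAP instance locations):

set hive.llap.client.consistent.splits=false;
source query12.sql;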
Created 01-11-2017 04:48 AM
The problem was that the root user was being used to run the queries. It was solved by switching to the hive user (example below).
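For reference, a minimal way to re-run the same query as the hive user (a sketch; it assumes you can switch to that account):

su - hive -c '/usr/hdp/2.5.3.0-38/hive2/bin/beeline -u jdbc:hive2:///tpcds_bin_partitioned_orc_10000 -f query12.sql'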
Created 05-11-2018 10:06 AM
Hi. I had the same issue, and setting hive.llap.client.consistent.splits to false helped. However, I also had to set hive.llap.execution.mode=none; to get it working. As far as I know, I am now using Hive 2.1.0 without LLAP, but with doAs. See the sketch below.
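To summarize, the combination that worked for me was (a sketch; run in the beeline session, or set the equivalent properties via Ambari):

set hive.llap.client.consistent.splits=false;
set hive.llap.execution.mode=none;

With hive.llap.execution.mode=none, operators run in plain Tez containers rather than in the LLAP daemons, which matches running without LLAP as described above.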