Created on 01-01-2017 01:25 AM
SYMPTOM: Submitting a MapReduce job or starting the Hive CLI fails with the error "submitted by user hive to non-leaf queue: default", as shown in the two sessions below.

[hive@sumeshhdpn2 root]$ hadoop jar /usr/hdp/2.2.4.2-2/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 1 10
Number of Maps = 1
Samples per Map = 10
Wrote input for Map #0
Starting Job
16/08/15 04:12:47 INFO impl.TimelineClientImpl: Timeline service address: http://sumeshhdpn2:8188/ws/v1/timeline/
16/08/15 04:12:47 INFO client.RMProxy: Connecting to ResourceManager at sumeshhdpn2/172.25.16.48:8050
16/08/15 04:12:48 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 129 for hive on ha-hdfs:sumeshhdp
16/08/15 04:12:48 INFO security.TokenCache: Got dt for hdfs://sumeshhdp; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:sumeshhdp, Ident: (HDFS_DELEGATION_TOKEN token 129 for hive)
16/08/15 04:12:53 INFO input.FileInputFormat: Total input paths to process : 1
16/08/15 04:12:56 INFO mapreduce.JobSubmitter: number of splits:1
16/08/15 04:12:58 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1470233265007_0006
16/08/15 04:12:58 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:sumeshhdp, Ident: (HDFS_DELEGATION_TOKEN token 129 for hive)
16/08/15 04:13:03 INFO impl.YarnClientImpl: Submitted application application_1470233265007_0006
16/08/15 04:13:04 INFO mapreduce.Job: The url to track the job: http://sumeshhdpn2:8088/proxy/application_1470233265007_0006/
16/08/15 04:13:04 INFO mapreduce.Job: Running job: job_1470233265007_0006
16/08/15 04:13:04 INFO mapreduce.Job: Job job_1470233265007_0006 running in uber mode : false
16/08/15 04:13:04 INFO mapreduce.Job: map 0% reduce 0%
16/08/15 04:13:04 INFO mapreduce.Job: Job job_1470233265007_0006 failed with state FAILED due to: Application application_1470233265007_0006 submitted by user hive to non-leaf queue: default
16/08/15 04:13:04 INFO mapreduce.Job: Counters: 0
Job Finished in 17.775 seconds
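Before looking at the Hive-side failure, the queue hierarchy can be checked directly from the command line. The sketch below is illustrative only: the queue name "test" and the capacity figures are examples, and the exact output format depends on the Hadoop version. A queue listed underneath "default" means "default" is no longer a leaf queue, which is exactly what the errors above and below complain about.

    [hive@sumeshhdpn2 root]$ mapred queue -list
    ======================
    Queue Name : default
    Queue State : running
    Scheduling Info : Capacity: 100.0, MaximumCapacity: 100.0, CurrentCapacity: 0.0
        ======================
        Queue Name : test
        Queue State : running
        Scheduling Info : Capacity: 100.0, MaximumCapacity: 100.0, CurrentCapacity: 0.0

The same error also appears when starting the Hive CLI, because Hive pre-warms a Tez session in the same queue: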
[hive@sumeshhdpn3 root]$ hive
Logging initialized using configuration in file:/etc/hive/conf/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.2-2/hive/lib/hive-jdbc-0.14.0.2.2.4.2-2-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1470233265007_0007 submitted by user hive to non-leaf queue: default
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:457)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:672)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1470233265007_0007 submitted by user hive to non-leaf queue: default
	at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:612)
	at org.apache.tez.client.TezClient.preWarm(TezClient.java:585)
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:200)
	at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:122)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:454)
	... 8 more

ROOT CAUSE: This error is triggered when a user creates a child queue under the "default" queue. The "default" queue must remain a leaf queue, so it should not have any child queues created under it. For example, if a queue "default.test" is created under the "default" queue, every MapReduce job submission and every Hive shell startup fails with the exceptions shown above (see the configuration sketch after the resolution below).

RESOLUTION / WORKAROUND: To address this issue, remove the child queue from under the "default" queue. If an additional queue is needed, create it directly under "root" and assign it the required resources.
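As an illustrative sketch only (the queue name "test" and the capacity values are examples, not taken from the original environment), the problematic layout corresponds to Capacity Scheduler properties like the following, where the ".queues" entry turns "default" into a parent queue:

    yarn.scheduler.capacity.root.queues=default
    yarn.scheduler.capacity.root.default.queues=test
    yarn.scheduler.capacity.root.default.capacity=100
    yarn.scheduler.capacity.root.default.test.capacity=100

The corrected layout moves the extra queue up so that it is a sibling of "default" directly under "root"; the capacities shown are arbitrary examples and only need to sum to 100 across the children of "root":

    yarn.scheduler.capacity.root.queues=default,test
    yarn.scheduler.capacity.root.default.capacity=80
    yarn.scheduler.capacity.root.test.capacity=20

After changing the configuration, the Capacity Scheduler can be reloaded with "yarn rmadmin -refreshQueues" (or by restarting the ResourceManager from Ambari). Once "default" is again a leaf queue, submissions to it succeed.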