
Can't get Master Kerberos principal for use as renewer

Expert Contributor

Hi Cloudera experts,

I set up Kerberos recently, and everything is OK now except Hive. When I use Hive at the OS level to run `show tables` or `show databases`, it fails with "Can't get Master Kerberos principal for use as renewer". Note that this does not happen on every host: only two hosts have this problem; the others are fine.

 

Then I enabled DEBUG logging to trace the error. The complete output follows:

 

[hdfs@namenode02 ~]$ hive -hiveconf hive.root.logger=DEBUG,console


14/09/30 10:20:01 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/09/30 10:20:01 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/09/30 10:20:01 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/09/30 10:20:01 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/09/30 10:20:01 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/09/30 10:20:01 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/09/30 10:20:01 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/09/30 10:20:02 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
14/09/30 10:20:02 DEBUG common.LogUtils: Using hive-site.xml found on CLASSPATH at /etc/hive/conf.cloudera.hive/hive-site.xml
14/09/30 10:20:02 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
14/09/30 10:20:02 INFO SessionState:
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
14/09/30 10:20:02 DEBUG parse.VariableSubstitution: Substitution is on: hive
14/09/30 10:20:02 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
14/09/30 10:20:02 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
14/09/30 10:20:02 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[GetGroups], about=, type=DEFAULT, always=false, sampleName=Ops)
14/09/30 10:20:02 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
14/09/30 10:20:02 DEBUG security.Groups: Creating new Groups object
14/09/30 10:20:02 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
14/09/30 10:20:02 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
14/09/30 10:20:02 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
14/09/30 10:20:02 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
14/09/30 10:20:02 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
14/09/30 10:20:02 DEBUG security.UserGroupInformation: hadoop login
14/09/30 10:20:02 DEBUG security.UserGroupInformation: hadoop login commit
14/09/30 10:20:02 DEBUG security.UserGroupInformation: using kerberos user:hdfs@DDS.COM
14/09/30 10:20:02 DEBUG security.UserGroupInformation: UGI loginUser:hdfs@DDS.COM (auth:KERBEROS)
14/09/30 10:20:02 DEBUG security.UserGroupInformation: Found tgt Ticket (hex) =
0000: 61 82 01 31 30 82 01 2D A0 03 02 01 05 A1 09 1B a..10..-........
0010: 07 44 44 53 2E 43 4F 4D A2 1C 30 1A A0 03 02 01 .DDS.COM..0.....
0020: 02 A1 13 30 11 1B 06 6B 72 62 74 67 74 1B 07 44 ...0...krbtgt..D
0030: 44 53 2E 43 4F 4D A3 81 FC 30 81 F9 A0 03 02 01 DS.COM...0......
0040: 11 A1 03 02 01 01 A2 81 EC 04 81 E9 43 A2 3C C7 ............C.<.
0050: 1C 31 98 F3 07 C1 AD 5F 83 F2 7E 3C 46 11 81 1A .1....._...<F...
0060: EA 89 16 43 9C 61 28 A0 75 18 8D 6C BE 12 9E FE ...C.a(.u..l....
0070: A8 D3 71 EB 60 5E 6F E1 A7 87 75 E3 27 E8 5F 66 ..q.`^o...u.'._f
0080: 18 E8 AE A3 CD 71 B2 A3 0B 8F A9 DF 50 38 12 56 .....q......P8.V
0090: 28 88 B4 87 96 B5 FD 5B 0A 1C 69 31 CA D7 9B F2 (......[..i1....
00A0: 5D 32 5A 00 34 44 54 46 B4 44 BB 21 C6 CD 00 1A ]2Z.4DTF.D.!....
00B0: CE 6D 9D 43 94 AC 17 F6 6A 82 A1 33 22 20 98 FA .m.C....j..3" ..
00C0: 80 90 A0 40 8D 72 22 45 D8 8E 0C AE 86 22 3E 9B ...@.r"E.....">.
00D0: 5A 70 8B C0 DF 8E 63 C4 F1 39 17 93 B9 0A E8 49 Zp....c..9.....I
00E0: 90 00 73 7B 5A FE 6C B2 B3 81 59 65 95 9A 3A CA ..s.Z.l...Ye..:.
00F0: BC 39 D9 0F 1A 67 B4 11 3C 14 7B A2 09 1B 06 F9 .9...g..<.......
0100: FC F4 83 A0 55 AD 40 E5 E3 2F DD EF 91 64 6B 13 ....U.@../...dk.
0110: C4 24 83 EE C4 E9 13 E1 4E B2 6E DB 07 E7 83 03 .$......N.n.....
0120: DA D4 C6 12 52 72 05 0F 9D E9 32 5E C9 C7 BC EA ....Rr....2^....
0130: D5 82 B8 D7 0B .....

Client Principal = hdfs@DDS.COM
Server Principal = krbtgt/DDS.COM@DDS.COM
Session Key = EncryptionKey: keyType=17 keyBytes (hex dump)=
0000: 40 94 BD 67 18 56 ED 6E 06 70 51 E1 2D 86 A0 A2 @..g.V.n.pQ.-...


Forwardable Ticket true
Forwarded Ticket false
Proxiable Ticket false
Proxy Ticket false
Postdated Ticket false
Renewable Ticket true
Initial Ticket true
Auth Time = Tue Sep 30 10:19:56 CST 2014
Start Time = Tue Sep 30 10:19:56 CST 2014
End Time = Wed Oct 01 10:19:56 CST 2014
Renew Till = Tue Oct 07 10:19:56 CST 2014
Client Addresses Null
14/09/30 10:20:02 DEBUG security.UserGroupInformation: Current time is 1412043602610
14/09/30 10:20:02 DEBUG security.UserGroupInformation: Next refresh is 1412112716000
14/09/30 10:20:02 DEBUG security.Groups: Returning fetched groups for 'hdfs'
14/09/30 10:20:02 DEBUG security.Groups: Returning cached groups for 'hdfs'
14/09/30 10:20:02 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

From the output above, we can be sure that Kerberos itself works fine: the ticket is valid and there are no errors. Next I ran `show tables`:
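As a side note, the same conclusion can be checked directly from the OS shell with `klist`, which lists the tickets in the current credential cache (the realm shown is just the one from this thread). A minimal, guarded sketch:

```shell
# Show the current Kerberos credential cache. On a healthy client this lists
# a krbtgt/<REALM>@<REALM> entry (krbtgt/DDS.COM@DDS.COM in this thread)
# with valid start and expiry times.
if command -v klist >/dev/null 2>&1; then
  result=$(klist 2>&1 || true)   # klist exits non-zero when no cache exists
else
  result="klist not installed on this host"
fi
echo "$result"
```

If `klist` shows a valid, unexpired TGT, the problem is not ticket acquisition but something downstream, which matches what the DEBUG log above shows.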

 

 

hive> show tables;
14/09/30 10:20:08 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/09/30 10:20:08 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 DEBUG parse.VariableSubstitution: Substitution is on: show tables
14/09/30 10:20:08 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO parse.ParseDriver: Parsing command: show tables
14/09/30 10:20:08 INFO parse.ParseDriver: Parse Completed
14/09/30 10:20:08 INFO log.PerfLogger: </PERFLOG method=parse start=1412043608258 end=1412043608484 duration=226 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 DEBUG nativeio.NativeIO: Initialized cache for IDs to User/Group mapping with a cache timeout of 14400 seconds.
14/09/30 10:20:08 INFO ql.Driver: Semantic Analysis Completed
14/09/30 10:20:08 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1412043608485 end=1412043608645 duration=160 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 DEBUG lazy.LazySimpleSerDe: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with: columnNames=[tab_name] columnTypes=[string] separator=[[B@b11fcc2] nullstring= lastColumnTakesRest=false
14/09/30 10:20:08 INFO exec.ListSinkOperator: Initializing Self 0 OP
14/09/30 10:20:08 DEBUG lazy.LazySimpleSerDe: org.apache.hadoop.hive.serde2.DelimitedJSONSerDe initialized with: columnNames=[] columnTypes=[] separator=[[B@3d3cdc3b] nullstring= lastColumnTakesRest=false
14/09/30 10:20:08 INFO exec.ListSinkOperator: Operator 0 OP initialized
14/09/30 10:20:08 INFO exec.ListSinkOperator: Initialization Done 0 OP
14/09/30 10:20:08 DEBUG lazy.LazySimpleSerDe: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with: columnNames=[tab_name] columnTypes=[string] separator=[[B@4190cb05] nullstring= lastColumnTakesRest=false
14/09/30 10:20:08 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
14/09/30 10:20:08 INFO log.PerfLogger: </PERFLOG method=compile start=1412043608226 end=1412043608736 duration=510 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO ql.Driver: Creating lock manager of type org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-cdh5.1.0--1, built on 07/12/2014 13:39 GMT
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:host.name=namenode02.hadoop
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_55
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.7.0_55-cloudera/jre

.......................................................

(log lines omitted)

.......................................................

14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.el6.x86_64
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:user.name=hdfs
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:user.home=/var/lib/hadoop-hdfs
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Client environment:user.dir=/var/lib/hadoop-hdfs
14/09/30 10:20:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=cm-server.hadoop,datanode03.hadoop,datanode02.hadoop:2181 sessionTimeout=600000 watcher=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager$DummyWatcher@57a50790
14/09/30 10:20:08 DEBUG zookeeper.ClientCnxn: zookeeper.disableAutoWatchReset is false
14/09/30 10:20:08 INFO zookeeper.ClientCnxn: Opening socket connection to server datanode03.hadoop/10.32.87.49:2181. Will not attempt to authenticate using SASL (unknown error)
14/09/30 10:20:08 INFO zookeeper.ClientCnxn: Socket connection established to datanode03.hadoop/10.32.87.49:2181, initiating session
14/09/30 10:20:08 DEBUG zookeeper.ClientCnxn: Session establishment request sent on datanode03.hadoop/10.32.87.49:2181
14/09/30 10:20:08 INFO zookeeper.ClientCnxn: Session establishment complete on server datanode03.hadoop/10.32.87.49:2181, sessionid = 0x348c39df6340010, negotiated timeout = 60000
14/09/30 10:20:08 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x348c39df6340010, packet:: clientPath:null serverPath:null finished:false header:: 1,1 replyHeader:: 1,47244640411,-110 request:: '/hive_zookeeper_namespace_hive,,v{s{31,s{'world,'anyone}}},0 response::
14/09/30 10:20:08 INFO log.PerfLogger: <PERFLOG method=acquireReadWriteLocks from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO log.PerfLogger: </PERFLOG method=acquireReadWriteLocks start=1412043608816 end=1412043608816 duration=0 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/09/30 10:20:08 INFO ql.Driver: Starting command: show tables
14/09/30 10:20:08 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1412043608226 end=1412043608820 duration=594 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO log.PerfLogger: <PERFLOG method=task.DDL.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:08 INFO hive.metastore: Trying to connect to metastore with URI thrift://datanode01.hadoop:9083
14/09/30 10:20:08 DEBUG security.UserGroupInformation: PrivilegedAction as:hdfs@DDS.COM (auth:KERBEROS) from:org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
14/09/30 10:20:08 DEBUG transport.TSaslTransport: opening transport org.apache.thrift.transport.TSaslClientTransport@6742f991
14/09/30 10:20:09 DEBUG transport.TSaslClientTransport: Sending mechanism name GSSAPI and initial response of length 553
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: Writing message with status START and payload length 6
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: Writing message with status OK and payload length 553
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: Start message handled
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: Received message with status OK and payload length 108
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: Writing message with status OK and payload length 0
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: Received message with status OK and payload length 32
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: Writing message with status COMPLETE and payload length 32
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: Main negotiation loop complete
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: SASL Client receiving last message
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: Received message with status COMPLETE and payload length 0
14/09/30 10:20:09 INFO hive.metastore: Connected to metastore.
14/09/30 10:20:09 DEBUG transport.TSaslTransport: writing data length: 39
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: reading data length: 136
14/09/30 10:20:09 DEBUG transport.TSaslTransport: writing data length: 46
14/09/30 10:20:09 DEBUG transport.TSaslTransport: CLIENT: reading data length: 76
14/09/30 10:20:09 INFO log.PerfLogger: </PERFLOG method=task.DDL.Stage-0 start=1412043608820 end=1412043609187 duration=367 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:09 INFO log.PerfLogger: </PERFLOG method=runTasks start=1412043608820 end=1412043609187 duration=367 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:09 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1412043608816 end=1412043609187 duration=371 from=org.apache.hadoop.hive.ql.Driver>
OK
14/09/30 10:20:09 INFO ql.Driver: OK
14/09/30 10:20:09 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:09 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1412043609187 end=1412043609187 duration=0 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:09 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1412043608226 end=1412043609187 duration=961 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:09 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/09/30 10:20:28 DEBUG zookeeper.ClientCnxn: Got ping response for sessionid: 0x348c39df6340010 after 2ms
Failed with exception java.io.IOException:java.io.IOException: Can't get Master Kerberos principal for use as renewer
14/09/30 10:20:29 ERROR CliDriver: Failed with exception java.io.IOException:java.io.IOException: Can't get Master Kerberos principal for use as renewer
java.io.IOException: java.io.IOException: Can't get Master Kerberos principal for use as renewer
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:557)
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:495)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:139)
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1578)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:280)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.io.IOException: Can't get Master Kerberos principal for use as renewer
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:202)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:386)
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:521)
... 14 more

14/09/30 10:20:29 INFO exec.ListSinkOperator: 0 finished. closing...
14/09/30 10:20:29 INFO exec.ListSinkOperator: 0 forwarded 0 rows
Time taken: 21.008 seconds
14/09/30 10:20:29 INFO CliDriver: Time taken: 21.008 seconds
14/09/30 10:20:29 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:29 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1412043629232 end=1412043629232 duration=0 from=org.apache.hadoop.hive.ql.Driver>
14/09/30 10:20:29 DEBUG zookeeper.ZooKeeper: Closing session: 0x348c39df6340010
14/09/30 10:20:29 DEBUG zookeeper.ClientCnxn: Closing client for session: 0x348c39df6340010
14/09/30 10:20:29 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x348c39df6340010, packet:: clientPath:null serverPath:null finished:false header:: 2,-11 replyHeader:: 2,47244640412,0 request:: null response:: null
14/09/30 10:20:29 DEBUG zookeeper.ClientCnxn: Disconnecting client for session: 0x348c39df6340010
14/09/30 10:20:29 INFO zookeeper.ClientCnxn: EventThread shut down
14/09/30 10:20:29 INFO zookeeper.ZooKeeper: Session: 0x348c39df6340010 closed


10 Replies

Expert Contributor

Sorry, I forgot to mention: this is CDH 5.1.0.

Expert Contributor

The strange thing is why only two hosts have this problem while the others are OK. I used Cloudera Manager to set up Kerberos.

I have been struggling with this for nearly a week and can't find anything via Google. Please help, thanks.

Expert Contributor

I have managed to resolve this problem.

It was caused by the HADOOP_YARN_HOME and HADOOP_MAPRED_HOME environment variables. If I set these two variables to the "*-0.20." directories manually before invoking Hive, it works.

From the beginning I kept wondering why only these two hosts had the problem while the others were fine. Eventually I noticed that the other hosts each run a NameNode or the ResourceManager, while these two hosts have no YARN-related roles at all.

Following that assumption, I installed a NodeManager on these two hosts, tried again, and everything works.

Cloudera supporters, could you explain why this situation occurs? Which hosts get YARN roles depends on requirements; we can't install YARN on every host. If hosts without YARN roles can't run Hive, that seems unreasonable.
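A minimal sketch of that workaround, under the assumption of a CDH parcel layout (the exact paths vary by installation and are illustrative, not the poster's literal values):

```shell
# The poster set both variables to the MRv1 ("*-0.20.") directories before
# invoking Hive. The parcel path below is an illustrative example only.
export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce
export HADOOP_YARN_HOME="$HADOOP_MAPRED_HOME"
# Re-run the query that previously failed (guarded so this sketch is safe
# to source on machines without Hive installed):
command -v hive >/dev/null 2>&1 && hive -e 'show tables;' || true
```

This is a per-shell workaround; the role-based fix discussed below it (adding YARN roles or a Gateway to the affected hosts) is the more durable one.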

New Contributor

Please add YARN Gateway roles to those two hosts; that should resolve the issue.
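For context: the stack trace above shows the exception coming from `TokenCache.obtainTokensForNamenodesInternal`, which looks up the ResourceManager's Kerberos principal in the client configuration to name a delegation-token renewer; on hosts with no YARN role (and no Gateway), that property is never deployed. A hedged diagnostic sketch (the config path is the common CM-managed default; adjust for your layout):

```shell
# Check whether this host's deployed client config carries the RM principal
# that is used as the delegation-token renewer. /etc/hadoop/conf is the
# usual CM-managed location; override via HADOOP_CONF_DIR if yours differs.
CONF_DIR="${HADOOP_CONF_DIR:-/etc/hadoop/conf}"
if grep -q 'yarn.resourcemanager.principal' "$CONF_DIR/yarn-site.xml" 2>/dev/null; then
  rm_principal="present"
else
  rm_principal="missing"   # deploy a YARN Gateway role + client configs here
fi
echo "yarn.resourcemanager.principal: $rm_principal"
```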

New Contributor

Hi,

I am having the same issue. It looks like exactly the same case: the client runs outside of Hadoop, so there is obviously no NameNode or ResourceManager on the application node. When trying to access a Kerberized Hive cluster, I run into:

(java.io.IOException) Can't get Master Kerberos principal for use as renewer

So does this mean I have to install a NodeManager on the server this application runs on? The other suggested solution is to add a YARN Gateway role to this host. How do we do that?

Thanks,

 

Thanks,

New Contributor

How do I add a YARN Gateway role? Can you please explain?

New Contributor

@jijose Can you please tell us how to add a YARN Gateway role?

Community Manager

Hi @prasha, as this is an older post, you will have a better chance of receiving a resolution by starting a new thread. A new thread is also an opportunity to provide details specific to your environment, which could help others give you a more accurate answer. You can link this thread as a reference in your new post.



Regards,

Vidya Sargur,
Community Manager



Contributor

@jijose If you are using Cloudera Manager: log in to the Cloudera Manager UI > click on the cluster > click "YARN" > Actions > Add Role Instances. You will land on the Assign Roles page.

Assign the Gateway role to the host from which you want to run jobs, then save the configuration and deploy the client configuration.

 

You can then try submitting a job from the host newly added to the YARN Gateway role. Thanks.
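For completeness, the same Gateway-role creation can also be scripted against the Cloudera Manager REST API. This is a hedged sketch only: the API version, credentials, cluster/service names, and `HOST_ID` are placeholders you must replace, and the command is printed as a dry run rather than executed -- check the API documentation for your CM version.

```shell
# Build the role-creation request for a YARN Gateway role on one host and
# print it as a dry run (remove the echo to actually send the request).
CM_URL="http://cm-server.hadoop:7180/api/v10"   # host name reused from this thread
PAYLOAD='{"items":[{"type":"GATEWAY","hostRef":{"hostId":"HOST_ID"}}]}'
echo curl -u admin:admin -X POST -H 'Content-Type: application/json' \
  -d "$PAYLOAD" "$CM_URL/clusters/Cluster1/services/yarn/roles"
```

After creating the role, you would still deploy client configuration (in the UI or via the API) for the fix to take effect on that host.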