Member since 07-09-2019 · 13 Posts · 0 Kudos Received · 0 Solutions
10-04-2019
12:02 PM
I am having the same error, but I didn't understand the solution. Can you please explain it? Thank you.
08-31-2019
10:22 AM
Thanks for replying @Shelton @EricL.
We only have MIT Kerberos and don't have any Active Directory.
These are the outputs. We have two KDCs set up for each cluster, but they are not replicating to each other. We have one more cluster with the same REALM name; it also has two KDCs with no replication happening. It is not only the Hive service: even if I try to install an extra NodeManager, I get the same error.
[root@spectra-xx-z15p xxxxxxx]# klist -kt /etc/security/keytabs/hive.service.keytab
Keytab name: FILE:/etc/security/keytabs/hive.service.keytab
klist: Key table file '/etc/security/keytabs/hive.service.keytab' not found while starting keytab scan
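(In this output the keytab is being looked for under /etc/security/keytabs/, but on a Cloudera Manager-managed cluster the per-role keytabs are deployed into each role's process directory, as the listings later in this post show. A sketch for locating and inspecting the current Metastore keytab, with paths assumed from those listings:)

```shell
# Cloudera Manager keeps one keytab copy per role instance under the
# agent's process directory; the highest-numbered directory is the
# current one. Paths are assumptions based on the output in this post.
PROC_DIR=/var/run/cloudera-scm-agent/process
latest=$(ls -d "$PROC_DIR"/*-hive-HIVEMETASTORE 2>/dev/null | sort -V | tail -1)
if [ -n "$latest" ]; then
    klist -kt "$latest/hive.keytab"    # list principals and key versions (KVNO)
else
    echo "no HIVEMETASTORE process directory found"
fi
```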
[root@spectra-xx-z15p xxxxxxx]# cat /etc/krb5.conf
# Other applications require this directory to perform krb5 configuration.
includedir /etc/krb5.conf.d/
# This file is provided by the CADA client package
# Previous versions of this file can be found in /opt/cada/backups/
# $Id: krb5.conf 10925 2010-05-14 19:55:23Z xxxxxxx $
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = SPECTRA.XXXXXXX.NET
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
forwardable = yes
renew_lifetime = 180d
[realms]
SPECTRA.XXXXXXX.NET = {
kdc = spectra-xx-z39p.sys.xxxxxxx.net
kdc = spectra-xx-z40p.sys.xxxxxxx.net
admin_server = spectra-po-z39p.sys.xxxxxxx.net
}
XXXXXXX.NET = {
kdc = kdc-m.xxxxxxx.net:88
kdc = kdc.xxxxxxx.net:88
admin_server = kdc-m.xxxxxxx.net:749
}
[domain_realm]
.xxxxxxx.net = XXXXXXX.NET
xxxxxxx.net = XXXXXXX.NET
.xxxxxxx.com = XXXXXXX.NET
xxxxxxx.com = XXXXXXX.NET
.sys.xxxxxxx.net = SPECTRA.XXXXXXX.NET
sys.xxxxxx.net = SPECTRA.xxxxxx.NET
[appdefaults]
pam = {
debug = false
forwardable = true
krb4_convert = false
chpw_prompt = sshd
}
pkinit = {
allow_pkinit = false
}
Below are the hive.keytab outputs from the Hive Metastore and HiveServer2 process directories.
[root@spectra-xx-z15p process]# cd /var/run/cloudera-scm-agent/process/17710-hive-HIVEMETASTORE/
[root@spectra-xx-z15p 17710-hive-HIVEMETASTORE]# ls
cloudera-monitor.properties core-site.xml hive.keytab hive-site.xml process_timestamp sentry-site.xml yarn-conf
cloudera-stack-monitor.properties creds.localjceks hive-log4j.properties logs redaction-rules.json service-metrics.properties
[root@spectra-xx-z15p 17710-hive-HIVEMETASTORE]# klist -kt hive.keytab
Keytab name: FILE:hive.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
[root@spectra-xx-z15p 17710-hive-HIVEMETASTORE]# cd /var/run/cloudera-scm-agent/process/17709-hive-HIVESERVER2/
[root@spectra-xx-z15p 17709-hive-HIVESERVER2]# ls
cloudera-monitor.properties hive.keytab logs process_timestamp service-metrics.properties
cloudera-stack-monitor.properties hive-log4j.properties navigator.client.properties redaction-rules.json yarn-conf
core-site.xml hive-site.xml navigator.lineage.client.properties sentry-site.xml
[root@spectra-xx-z15p 17709-hive-HIVESERVER2]# klist -kt hive.keytab
Keytab name: FILE:hive.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 hive/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 HTTP/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 HTTP/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 HTTP/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 HTTP/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 HTTP/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 HTTP/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 HTTP/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
2 08/29/2019 16:57:43 HTTP/spectra-xx-z15p.sys.xxxxxxx.net@SPECTRA.XXXXXXX.NET
[root@spectra-xx-z15p 17709-hive-HIVESERVER2]#
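(CLIENT_NOT_FOUND means the KDC has no principal entry matching the exact string the client asked for, and Kerberos principals are case-sensitive, so a mixed-case FQDN from `hostname -f` is a common cause. A sketch of building the expected principal and the kadmin checks to run on the KDC; realm and hostname below are stand-ins for the masked values in this post:)

```shell
# Build the service principal the Metastore will request. Kerberos
# principals are case-sensitive, so the FQDN must match the KDC entry
# exactly; realm and hostname here are stand-ins for the masked values.
REALM="SPECTRA.XXXXXXX.NET"
fqdn="SPECTRA-XX-Z15P.sys.xxxxxxx.net"     # in practice: fqdn=$(hostname -f)
principal="hive/$(printf '%s' "$fqdn" | tr '[:upper:]' '[:lower:]')@${REALM}"
echo "$principal"
# On the KDC host, confirm the entry exists and its KVNO matches the keytab:
#   kadmin.local -q "listprincs hive/*"
#   kadmin.local -q "getprinc ${principal}"
```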
08-30-2019
07:20 AM
Hive Service won't start (HiveMetaStore [main]: org.apache.thrift.transport.TTransportException: java.io.IOException: Login failure for hive/xxxx.sys.xxxx.net@REALM.NET from keytab hive.keytab: javax.security.auth.login.LoginException: Client not found in Kerberos database )
HiveMetaStore [main]: org.apache.thrift.transport.TTransportException: java.io.IOException: Login failure for hive/xxxx.sys.xxxx.net@REALM.NET from keytab hive.keytab: javax.security.auth.login.LoginException: Client not found in Kerberos database (6) - CLIENT_NOT_FOUND
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server.<init>(HadoopThriftAuthBridge.java:358)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge.createServer(HadoopThriftAuthBridge.java:102)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6138)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6057)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: Login failure for hive/spectra-as-z15p.sys.comcast.net@SPECTRA.COMCAST.NET from keytab hive.keytab: javax.security.auth.login.LoginException: Client not found in Kerberos database (6) - CLIENT_NOT_FOUND
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:962)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server.<init>(HadoopThriftAuthBridge.java:353)
... 9 more
Caused by: javax.security.auth.login.LoginException: Client not found in Kerberos database (6) - CLIENT_NOT_FOUND
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:953)
... 10 more
Caused by: KrbException: Client not found in Kerberos database (6) - CLIENT_NOT_FOUND
at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:82)
at sun.security.krb5.KrbAsReqBuilder.send(KrbAsReqBuilder.java:316)
at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:361)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:776)
... 23 more
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.ASRep.init(ASRep.java:64)
at sun.security.krb5.internal.ASRep.<init>(ASRep.java:59)
at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:60)
... 26 more
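(The innermost cause, `Identifier doesn't match expected value (906)`, is an ASN.1 decoding error: the client received a reply it could not parse as a Kerberos AS-REP, which often points at the wrong host, or a non-KDC service, answering on a configured KDC address. A sketch that extracts the configured KDCs and probes port 88 on each; it assumes the krb5.conf layout shown earlier in this thread and GNU awk/bash:)

```shell
# Extract the KDC entries from the [realms] section of krb5.conf and
# probe each one on port 88. If a host listed as a KDC does not answer,
# or something else answers there, the AS exchange can fail this way.
conf=/etc/krb5.conf
awk '/^\[realms\]/{inr=1} inr && /kdc =/{print $3}' "$conf" 2>/dev/null |
while read -r kdc; do
    if timeout 3 bash -c "echo > /dev/tcp/${kdc%%:*}/88" 2>/dev/null; then
        echo "${kdc}: answers on 88"
    else
        echo "${kdc}: no answer on 88"
    fi
done
```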
Labels:
- Apache Hive
08-21-2019
05:05 PM
I am getting the below error continuously in my logs; can you help me solve it?
spectra.xxxx.sys.xxxxx.net:1004: DataXceiver error processing REQUEST_SHORT_CIRCUIT_FDS operation src: unix:/var/run/hdfs-sockets/dn dst: <local>
org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with block_token_identifier (expiryDate=1566429148134, keyId=-1119489055, userId=hbase, blockPoolId=BP-1603344558-10.146.65.4-1468089654785, blockId=1815159924, access modes=[READ]) is expired.
at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
at org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1289)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitFds(DataXceiver.java:295)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitFds(Receiver.java:219)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:121)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
at java.lang.Thread.run(Thread.java:745)
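(The `expiryDate` in the message is epoch milliseconds, so a useful first step is converting it to see when the token lapsed relative to the error; expired block tokens typically come from a long-running client, here HBase, re-presenting a cached token past its lifetime during a short-circuit read. A small conversion sketch using GNU `date`:)

```shell
# expiryDate in the log line is epoch milliseconds (value copied from
# the post); divide by 1000 and render it to see when the token lapsed.
expiry_ms=1566429148134
date -u -d "@$((expiry_ms / 1000))" '+%Y-%m-%d %H:%M:%S UTC'
# → 2019-08-21 23:12:28 UTC
```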
Labels:
- Apache HBase
- Cloudera Manager
07-09-2019
12:56 PM
Hello @bgooley, I am talking about /var/lib/cloudera-host-monitor and /var/lib/cloudera-service-monitor. They occupy almost 20GB of space, and I want to delete those files. The screenshot below shows what these folders contain. I need the space; if the chart history is gone, I am fine with that. Thank you.
07-09-2019
12:24 PM
Hello @bgooley, I am talking about /var/lib/cloudera-service-monitor and /var/lib/cloudera-host-monitor; together they occupy almost 20GB, and I want to delete the files in both of them, but I am not sure what will happen. It's a production cluster, so I don't want my actions to impact it.
cloudera-service-monitor has impala, reports, subject_record, ts, and yarn folders. I want to delete the files in subject_record and ts because they occupy 9GB. Is it OK to delete them?
The ts folder has these files: stream, ts_stream_rollup_PT21600S, ts_stream_rollup_PT600S, ts_stream_rollup_PT86400S, ts_type_rollup_PT3600S, ts_type_rollup_PT604800S, type, ts_entity_metadata, ts_stream_rollup_PT3600S, ts_stream_rollup_PT604800S, ts_type_rollup_PT21600S, ts_type_rollup_PT600S, ts_type_rollup_PT86400S.
The subject_record folder has these files: subject_ts, ts_subject.
cloudera-host-monitor has these folders: subject_record, ts. As in cloudera-service-monitor, almost the same files are present in both the subject_record and ts folders. Is it OK to manually delete them? If the chart data is gone, that is fine, but we need the space.
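(As a hedged sketch of the cleanup being asked about, with paths taken from the post: stop the Service Monitor and Host Monitor roles in Cloudera Manager first, delete only the time-series trees, then start the roles again so they recreate an empty store; only chart/metric history is lost.)

```shell
# Hedged sketch -- do NOT run while the monitors are up. In Cloudera
# Manager: Cloudera Management Service -> stop the Service Monitor and
# Host Monitor roles, then remove only the time-series stores:
for dir in /var/lib/cloudera-service-monitor /var/lib/cloudera-host-monitor; do
    if [ -d "$dir" ]; then
        du -sh "$dir"                     # see how much space will be freed
        # rm -rf "$dir"/ts "$dir"/subject_record   # uncomment after stopping the roles
    else
        echo "$dir: not present on this host"
    fi
done
# Start the roles again; they rebuild empty stores (chart history is lost).
```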
07-09-2019
11:06 AM
Hi, I want to delete the files under /var/lib/cloudera-service-monitor and /var/lib/cloudera-host-monitor, but when I search, everyone says it is not a good idea to delete them. However, 90% of our space is used by these files. I know I will lose the chart data history; that won't be a problem. If I delete them, will my production cluster work normally like before? If not, what errors will I get? We are using CDH 5.10.0 parcels. I need your insight on this issue.
Labels:
- Cloudera Manager