04-25-2018 12:45 PM
Hi Elif,

In my case the problem was not using the correct ticket. I was exporting a keytab every time, and after kinit I was able to get a ticket, but from time to time I was not using the latest process's ticket.

============================================================================
One example below: the output of klist on hive.keytab

[root@bdw1n07 sbilgic]# klist -k -t -e hive.keytab
Keytab name: FILE:hive.keytab
KVNO Timestamp         Principal
---- ----------------- ----------------------------------------------------------------------
  13 02/27/18 08:58:51 hive/......................................@...................................... (aes256-cts-hmac-sha1-96)
  13 02/27/18 08:58:51 hive/......................................@...................................... (aes128-cts-hmac-sha1-96)
  13 02/27/18 08:58:51 hive/......................................@...................................... (des3-cbc-sha1)
  13 02/27/18 08:58:51 hive/......................................@...................................... (arcfour-hmac)
  13 02/27/18 08:58:51 hive/......................................@...................................... (des-hmac-sha1)
  13 02/27/18 08:58:51 hive/......................................@...................................... (des-cbc-md5)
============================================================================

Clearly, the hive.keytab above was not generated by Cloudera Manager; it was created from kadmin or kadmin.local. Once that happens, the keytab generated by Cloudera Manager fails with a checksum error. I used a copy of hive.keytab generated by Cloudera Manager, copying it from the process directory.
***Note that the command:

kinit -kt /var/run/cloudera-scm-agent/process/`ls -1 /var/run/cloudera-scm-agent/process | grep HIVESERVER2 | sort -n | tail -1`/hive.keytab hive/$(hostname -f)

runs kinit with the keytab from the latest Hive process directory under /var/run/cloudera-scm-agent/process/.

***The latest process directory can be found with the command below:

ls -ltr /var/run/cloudera-scm-agent/process/ | grep HIVESERVER2

***Note that the hive.keytab under the process directory /var/run/cloudera-scm-agent/process/NNN-hive-HIVESERVER2/hive.keytab has principals for both hive and HTTP once the customer has configured the HiveServer2 WebUI.

So, do not export the keytab from kadmin or kadmin.local unless you are willing to configure Hive to use that keytab. Instead, get a copy of hive.keytab from the process directory: /var/run/cloudera-scm-agent/process/NNN-hive-HIVESERVER2/hive.keytab

Please let me know if you have further questions.
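The steps above can be collected into a small wrapper script. This is a sketch based on the commands in this post: the process path and the hive/$(hostname -f) principal format are assumptions that depend on your deployment, so adjust them before use.

```shell
#!/bin/sh
# Sketch: kinit using the newest Cloudera Manager process keytab for
# HiveServer2, as described in the post above. Not a tested tool.

PROC_DIR=${PROC_DIR:-/var/run/cloudera-scm-agent/process}

# Process directories are named NNN-hive-HIVESERVER2; sort -n compares the
# leading NNN, so tail -1 returns the most recently created instance.
latest_hs2_dir() {
  ls -1 "$1" 2>/dev/null | grep HIVESERVER2 | sort -n | tail -1
}

latest=$(latest_hs2_dir "$PROC_DIR")
if [ -n "$latest" ]; then
  kinit -kt "$PROC_DIR/$latest/hive.keytab" "hive/$(hostname -f)"
else
  echo "no HIVESERVER2 process directory found under $PROC_DIR" >&2
fi
```

The numeric sort matters: a plain lexical sort would put 99-hive-HIVESERVER2 after 103-hive-HIVESERVER2 and pick the wrong (older) directory.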
04-23-2018 09:23 AM
Sadly this doesn't work for me; my Navigator DB doesn't have this table. But I'm not upgrading, I'm using Oracle in the first place. Maybe it's another kind of installation error? Or do I need a software upgrade?

$ rpm -qa | grep cloudera
cloudera-manager-agent-5.12.1-1.cm5121.p0.6.el6.x86_64
cloudera-manager-daemons-5.12.1-1.cm5121.p0.6.el6.x86_64
cloudera-manager-server-5.12.1-1.cm5121.p0.6.el6.x86_64

3:33:19.872 PM ERROR SolrCore
[qtp1384454980-63]: org.apache.solr.common.SolrException: Cursor functionality requires a sort containing a uniqueKey field tie breaker
        at org.apache.solr.search.CursorMark.<init>(CursorMark.java:104)
02-09-2018 05:39 AM
Hi @mlussana1,

I have configured Kafka (2.2.0) with Sentry enabled in a Kerberized environment and am able to use it as a channel for Flume in CDH 5.13.

First of all, did you add kafka to the Allowed Connecting Users (sentry.service.allow.connect) in the Sentry configuration? And in order to grant privileges, your user must be in one of the Sentry admin groups listed in the Admin Groups (sentry.service.admin.group) configuration. Either of those may cause the sentry shell problem.

For the producer problem, I am not sure, but you may modify the jaas.conf file as follows:

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=false
  useTicketCache=true
  serviceName="kafka"
  principal="username@xxxx.xxx";
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=false
  useTicketCache=true
  serviceName="zookeeper"
  principal="username@xxxx.xxx";
};

And I run the consumer and producer from the command line as follows:

kafka-console-consumer --topic topicname --from-beginning --bootstrap-server brokerhostname:9092 --consumer.config consumer.properties
kafka-console-producer --broker-list [brokers]:9092 --topic topicname --producer.config client.properties

I hope this helps.
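The consumer.properties / client.properties files passed above are not shown in the post; for a Kerberized Kafka client they typically contain at least the two lines below. This is a minimal sketch assuming SASL over a plaintext listener; use SASL_SSL instead if your brokers have TLS enabled.

security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka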