Member since: 08-10-2015
12 Posts
4 Kudos Received
0 Solutions
09-03-2015
05:59 AM
3 Kudos
Thanks for the support. We followed the steps below from cloudera.com and the issue is now fixed:
http://www.cloudera.com/content/cloudera/en/documentation/cloudera-manager/v5-1-x/Configuring-Hadoop-Security-with-Cloudera-Manager/cm5chs_enable_hue_sec_s10.html

Troubleshooting the Kerberos Ticket Renewer: If the Hue Kerberos Ticket Renewer does not start, check your KDC configuration and the ticket renewal property, maxrenewlife, for the hue/<hostname> and krbtgt principals to ensure they are renewable. If not, running the following commands on the KDC will enable renewable tickets for these principals.

kadmin.local: modprinc -maxrenewlife 90day krbtgt/YOUR_REALM.COM
kadmin.local: modprinc -maxrenewlife 90day +allow_renewable hue/<hostname>@YOUR-REALM.COM
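For anyone landing on this thread later, a quick verification after running the modprinc commands above (a sketch only; it assumes the default Hue ticket cache path /tmp/hue_krb5_ccache seen elsewhere in this thread and a Cloudera Manager-managed Hue):

kadmin.local -q "getprinc krbtgt/YOUR_REALM.COM" | grep -i "renewable life"   # should now show the raised maximum
# Restart the Hue service (including the Kerberos Ticket Renewer role) from Cloudera Manager, then:
KRB5CCNAME=/tmp/hue_krb5_ccache klist -f
# 'renew until' should now be later than 'valid starting'.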
08-30-2015
10:06 PM
1 Kudo
Please find the requested output of KRB5CCNAME=/tmp/hue_krb5_ccache klist -fe below:

=============
[root@ngs-poc2 ~]# KRB5CCNAME=/tmp/hue_krb5_ccache klist -fe
Ticket cache: FILE:/tmp/hue_krb5_ccache
Default principal: hue/ngs-poc2.tcshydnextgen.com@TCSHYDNEXTGEN.COM

Valid starting     Expires            Service principal
08/31/15 09:48:03  09/01/15 09:48:03  krbtgt/TCSHYDNEXTGEN.COM@TCSHYDNEXTGEN.COM
        renew until 08/31/15 09:48:03, Flags: FRI
        Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
[root@ngs-poc2 ~]#
=============
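In this output the flags are FRI (forwardable, renewable, initial), but the 'renew until' time is identical to the 'valid starting' time, which is exactly the non-renewable condition the Hue log complains about. The renewal Hue attempts can be reproduced by hand (a sketch, using the same cache path as the log):

# Manual renewal attempt, mirroring what the Hue kt_renewer runs (sketch only).
/usr/bin/kinit -R -c /tmp/hue_krb5_ccache
# With 'renew until' equal to 'valid starting', this fails until maxrenewlife is raised
# on both the hue/<hostname> and krbtgt principals (see the 09-03-2015 post above).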
08-27-2015
09:14 PM
Thanks for the support, mkazia. I have regenerated the keys and restarted the services, but the issue is still not resolved. Please find the sample output of getprinc for the hue service principal:

kadmin.local: getprinc hue/ngs-poc1.tcshydnextgen.com@TCSHYDNEXTGEN.COM
Principal: hue/ngs-poc1.tcshydnextgen.com@TCSHYDNEXTGEN.COM
Expiration date: [never]
Last password change: Fri Aug 28 08:42:05 IST 2015
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 5 days 00:00:00
Last modified: Fri Aug 28 08:42:05 IST 2015 (cloudera-scm/admin@TCSHYDNEXTGEN.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 6
Key: vno 5, aes256-cts-hmac-sha1-96, no salt
Key: vno 5, aes128-cts-hmac-sha1-96, no salt
Key: vno 5, des3-cbc-sha1, no salt
Key: vno 5, arcfour-hmac, no salt
Key: vno 5, des-hmac-sha1, no salt
Key: vno 5, des-cbc-md5, no salt
MKey: vno 1
Attributes:
Policy: [none]
kadmin.local:

Here I see the maximum renewable life is 5 days, but I have configured it as 7d in kdc.conf:

[root@ngs-poc1 init.d]# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 TCSHYDNEXTGEN.COM = {
  #master_key_type = aes256-cts
  max_renewable_life = 7d 0h 0m 0s
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  default_principal_flags = +renewable
 }
 max_life = 24h
 max_renewable_life = 7d
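One likely explanation for the 5-day value, and what the 09-03-2015 post at the top of this thread ultimately did: max_renewable_life in kdc.conf only acts as a ceiling for new tickets and does not change the maxrenewlife attribute already stored on existing principals, so those have to be raised explicitly. A sketch with the principals from this thread (the 7day value is illustrative):

# Sketch: raise the stored maxrenewlife on the existing principals, then verify.
kadmin.local -q "modprinc -maxrenewlife 7day krbtgt/TCSHYDNEXTGEN.COM@TCSHYDNEXTGEN.COM"
kadmin.local -q "modprinc -maxrenewlife 7day +allow_renewable hue/ngs-poc1.tcshydnextgen.com@TCSHYDNEXTGEN.COM"
kadmin.local -q "getprinc hue/ngs-poc1.tcshydnextgen.com@TCSHYDNEXTGEN.COM" | grep -i "renewable life"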
08-26-2015
10:39 PM
Thanks for the timely support, Mkazia. The issue is still not resolved. As suggested, we made the changes in kdc.conf:

======================================
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 TCSHYDNEXTGEN.COM = {
  #master_key_type = aes256-cts
  max_renewable_life = 7d 0h 0m 0s
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  default_principal_flags = +renewable
 }
 max_life = 24h
 max_renewable_life = 7d
===================================

After modifying kdc.conf we restarted the services below:

service krb5kdc restart
service kadmin restart

and restarted the Hue service from CM.
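Also worth noting at this step (general Kerberos behaviour, not specific to CDH): tickets already sitting in a credential cache keep the renew-until time they were issued with, so a KDC restart alone does not make them renewable; the cache has to be repopulated with a fresh ticket. A sketch, assuming the Hue cache path that appears elsewhere in this thread:

# Sketch: discard the stale cache, restart Hue from CM so the Kerberos Ticket Renewer
# obtains a fresh ticket, then re-check its flags and renew-until time.
kdestroy -c /tmp/hue_krb5_ccache
KRB5CCNAME=/tmp/hue_krb5_ccache klist -f   # run after the Hue restart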
08-25-2015
04:11 AM
Please find the krb5.conf configuration below:

[root@ngs-poc1 ~]# cat /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = TCSHYDNEXTGEN.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 TCSHYDNEXTGEN.COM = {
  kdc = ngs-poc1.tcshydnextgen.com
  admin_server = ngs-poc1.tcshydnextgen.com
 }

[domain_realm]
 .tcshydnextgen.com = TCSHYDNEXTGEN.COM
 tcshydnextgen.com = TCSHYDNEXTGEN.COM
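For completeness, the client side of this configuration can be exercised directly (a sketch; 'testuser' is a placeholder principal, not one from this cluster): request a renewable ticket and confirm that the renew_lifetime of 7d is actually granted.

# Sketch with a placeholder principal; any existing principal with a known password works.
kinit -r 7d testuser@TCSHYDNEXTGEN.COM
klist | grep "renew until"   # should be about 7 days after 'valid starting' once the KDC side is fixed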
08-25-2015
04:09 AM
Renewing kerberos ticket to work around kerberos 1.8.1: /usr/bin/kinit -R -c /tmp/hue_krb5_ccache
Aug 24, 2:43:16 PM ERROR kt_renewer Couldn't renew kerberos ticket in order to work around Kerberos 1.8.1 issue.
Please check that the ticket for 'hue/ngs-poc2.tcshydnextgen.com@TCSHYDNEXTGEN.COM' is still renewable:
  $ kinit -f -c /tmp/hue_krb5_ccache
If the 'renew until' date is the same as the 'valid starting' date, the ticket cannot be renewed.
Please check your KDC configuration, and the ticket renewal policy (maxrenewlife) for the
'hue/ngs-poc2.tcshydnextgen.com@TCSHYDNEXTGEN.COM' and `krbtgt' principals.
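A first diagnostic pass that follows this message's own advice (a sketch; the principal names are taken from the rest of the thread): on the KDC, check the renewable-life settings of both principals named in the error.

kadmin.local -q "getprinc hue/ngs-poc2.tcshydnextgen.com@TCSHYDNEXTGEN.COM" | grep -i "renewable life"
kadmin.local -q "getprinc krbtgt/TCSHYDNEXTGEN.COM@TCSHYDNEXTGEN.COM" | grep -i "renewable life"
# 'Maximum renewable life: 0 days' means tickets for that principal can never be renewed.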
Labels:
- Cloudera Hue
- Kerberos
08-18-2015
04:30 AM
Hi Wilfred,

Many thanks for the reply.

-bash-4.1$ hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 100
Number of Maps = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
15/08/18 15:57:14 INFO client.RMProxy: Connecting to ResourceManager at hdp-poc2.tcshydnextgen.com/10.138.90.72:8032
15/08/18 15:57:14 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 40 for hdfs on ha-hdfs:nameservice1
15/08/18 15:57:14 INFO security.TokenCache: Got dt for hdfs://nameservice1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 40 for hdfs)
15/08/18 15:57:14 INFO input.FileInputFormat: Total input paths to process : 10
15/08/18 15:57:14 INFO mapreduce.JobSubmitter: number of splits:10
15/08/18 15:57:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1439544552504_0019
15/08/18 15:57:15 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 40 for hdfs)
15/08/18 15:57:15 INFO impl.YarnClientImpl: Submitted application application_1439544552504_0019
15/08/18 15:57:15 INFO mapreduce.Job: The url to track the job: http://hdp-poc2.tcshydnextgen.com:8088/proxy/application_1439544552504_0019/
15/08/18 15:57:15 INFO mapreduce.Job: Running job: job_1439544552504_0019
15/08/18 15:57:17 INFO mapreduce.Job: Job job_1439544552504_0019 running in uber mode : false
15/08/18 15:57:17 INFO mapreduce.Job: map 0% reduce 0%
15/08/18 15:57:17 INFO mapreduce.Job: Job job_1439544552504_0019 failed with state FAILED due to: Application application_1439544552504_0019 failed 2 times due to AM Container for appattempt_1439544552504_0019_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://hdp-poc2.tcshydnextgen.com:8088/proxy/application_1439544552504_0019/ Then, click on links to logs of each attempt.
Diagnostics: Application application_1439544552504_0019 initialization failed (exitCode=255) with output: Requested user hdfs is not whitelisted and has id 493, which is below the minimum allowed 1000
Failing this attempt. Failing the application.
15/08/18 15:57:17 INFO mapreduce.Job: Counters: 0
Job Finished in 3.528 seconds
java.io.FileNotFoundException: File does not exist: hdfs://nameservice1/user/hdfs/QuasiMonteCarlo_1439893630990_2073090894/out/reduce-out
 at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1132)
 at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1124)
 at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1124)
 at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1750)
 at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1774)
 at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
 at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
 at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
 at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
-bash-4.1$

Note: I am new to Hadoop and, as part of learning, I am executing various jobs. Here are the actions performed:

1) The MR job was executed as the hdfs user: su was used to switch to hdfs, and kinit hdfs was run before the MR job command. The error above was observed. From this it is clear that hdfs is not a whitelisted user, as its uidNumber is below 1000. Is this whitelist error responsible for the hive query failure? When the hive query is executed as the hdfs user, error2 is reported. If my observation is correct, may I know why the hive query result does not give the exact root cause of the failure, i.e. the whitelist error message observed during MR job execution?

The next question is not related to the one above, but could you please answer it as well:

2) With a local user, a hive query was executed from Hue and also from hive directly, without issuing a kinit command. The query succeeds from Hue but not from hive. So my question is: are operations performed from Hue independent of Kerberos security? If yes, may I know the reason?
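On the "not whitelisted" part of the failure: that check comes from YARN's LinuxContainerExecutor settings, which on a CM-managed cluster are edited through the NodeManager configuration rather than by hand. A hedged sketch of the relevant container-executor.cfg keys (the values below are illustrative, not necessarily this cluster's):

# Illustrative container-executor.cfg entries (sketch; CM exposes these as
# "Minimum User ID", "Allowed System Users" and "Banned System Users").
min.user.id=1000                         # containers refuse to run as users with a lower UID
allowed.system.users=nobody,impala,hive  # low-UID system users explicitly permitted
banned.users=hdfs,yarn,mapred,bin        # hdfs is commonly banned; submit jobs as a regular user instead

A common approach is to run the example job as a regular user (UID >= 1000) with its own Kerberos principal and HDFS home directory, rather than lowering min.user.id for hdfs.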
08-10-2015
05:51 AM
I am getting the below error after enabling Kerberos security in CDH 5.4.3.

hive> select count(*) from hive_test;
Query ID = hdfs_20150810165757_e7420efe-67e7-4a75-bf78-0d1383f7cc09
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1439195152382_0012, Tracking URL = http://hdp-poc2.tcshydnextgen.com:8088/proxy/application_1439195152382_0012/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1439195152382_0012
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2015-08-10 16:57:19,271 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1439195152382_0012 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
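When Hive reports only "return code 2" with zero mappers and reducers, the real cause is usually in the YARN application that Hive launched. One way to dig it out (a sketch, assuming log aggregation is enabled and using the application id from the output above):

# Pull the aggregated logs for the failed application (run kinit first on a kerberized cluster).
yarn logs -applicationId application_1439195152382_0012
# Alternatively, open the Tracking URL printed above in the ResourceManager web UI and follow the attempt logs.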