Member since: 07-15-2016
Posts: 16
Kudos Received: 3
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2601 | 09-06-2016 06:10 AM |
10-17-2016 10:32 PM
The user does have permission: when I run klist before and after calling my script, I find a valid ticket, which means the cron job was able to read the keytab file. I used the link to call multiple commands in the same cron job line. It still does not explain why I am getting this error, I am afraid 😞
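For reference, chaining the commands with && instead of ; would stop the line at the first failing step, which makes failures easier to spot (a sketch; callMyScriptWithParams is a placeholder):

```
# /etc/crontab-style entry (sketch): each step runs only if the previous one succeeded.
*/3 * * * * root kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs && klist && callMyScriptWithParams && klist
```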
10-17-2016 04:54 AM
I have a script that should run in a cron job and should be authenticated as the hdfs user through Kerberos. To run the script outside the cron job, from the shell, I execute the following commands:
sudo -i
kinit -V -k -t /etc/security/keytabs/hdfs.headless.keytab hdfs
callMyScriptWithParams
The above commands execute as I need them to. However, when I call the same set of commands in a cron job, I get the following error:
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
I get the same error if I run the same commands from the shell as my current user (not as root), as below:
sudo kinit -V -k -t /etc/security/keytabs/hdfs.headless.keytab hdfs # works fine
sudo callMyScriptWithParams # throws the error
I tried to create several versions of the cron job; one of them is below (it runs every three minutes for testing purposes):
*/3 * * * * root sudo -i; kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs; klist; callMyScriptWithParams; klist
I am calling 'klist' to check that I am getting the correct ticket. klist returns the hdfs user ticket before and after calling my script. Since I have a valid ticket, I am not sure why I am getting the above error. Below is the output when I obtain the ticket:
Using default cache: /run/user/krb5cc/krb5cc_0
Using principal: hdfs@MyRealm
Using keytab: /etc/security/keytabs/hdfs.headless.keytab
Authenticated to Kerberos v5
And this is an example of a retrieved ticket from 'klist':
Ticket cache: FILE:/run/user/krb5cc/krb5cc_0
Default principal: hdfs@MyRealm
Valid starting Expires Service principal
10/17/2016 15:12:01 10/18/2016 15:12:01 krbtgt/MyRealm@MyRealm
If I am retrieving a valid ticket before and after calling my script, then why am I getting the 'Failed to find any Kerberos tgt' error when I call the script? Especially since I called the same commands outside the cron job and they worked fine. P.S. I tried the cron job without the 'sudo -i' as well, but I am still getting the same error.
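For reference, the same steps written as a single wrapper script that the cron entry could call would look roughly like this (a sketch; callMyScriptWithParams is a placeholder, and the explicit KRB5CCNAME export is an assumption for illustration, not my exact setup):

```
#!/bin/bash
# Sketch of a wrapper the cron entry could call instead of chaining commands inline.
# Pointing KRB5CCNAME at the cache that kinit reports is an assumption for illustration.
export KRB5CCNAME=/run/user/krb5cc/krb5cc_0
kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs || exit 1
klist
callMyScriptWithParams   # placeholder for the real script and its parameters
klist
```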
10-16-2016 11:23 PM
I think this file is generated by Ambari. That is probably why the owner is root.
10-16-2016 11:21 PM
You were right! I assumed by default that the principal is hdfs, while it had a different name in the keytab file. Thanks kuldeep!
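For anyone hitting the same thing: the principal names a keytab actually contains can be listed with klist and then passed to kinit verbatim (the principal name below is illustrative, not the real one):

```
# List the entries stored in the keytab; the principal may not be plain 'hdfs'.
klist -kt /etc/security/keytabs/hdfs.headless.keytab
# Authenticate with the exact principal shown, e.g. (illustrative name):
kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-mycluster@MyRealm
```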
10-14-2016 02:54 AM
2 Kudos
I need to create hdfs-auto-snapshot using the hdfs user. My environment is Kerberos-authenticated, so, to do that, I called the following command to obtain a Kerberos ticket for the hdfs user:
kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
That command threw the following error:
Using default cache: /run/user/krb5cc/krb5cc_MyUserID
Using principal: hdfs@MyRealm
Using keytab: /etc/security/keytabs/hdfs.headless.keytab
kinit: Password has expired while getting initial credentials
When I try to add sudo, making the command
sudo kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
I get the following error:
Using default cache: /run/user/krb5cc/krb5cc_0
Using principal: hdfs@MyRealm
Using keytab: /etc/security/keytabs/hdfs.headless.keytab
kinit: Keytab contains no suitable keys for hdfs@MyRealm while getting initial credentials
The reason I thought I might need to use sudo is that the keytab file has permission "-r--r-----" and root is the owner.
Any idea how I can obtain a TGT for the hdfs user so that I can use it later?
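For reference, this is how I am checking the ownership (the group name in the commented output is my assumption, not the actual value):

```
# Check who owns the keytab and which group may read it:
ls -l /etc/security/keytabs/hdfs.headless.keytab
# -r--r----- 1 root hadoop ... hdfs.headless.keytab   (group name illustrative)
```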
Labels:
- Apache Hadoop
09-06-2016 06:10 AM
It turned out to be because Hive by default creates the tables in ORC format, while hive-testbench assumes that the default tables are in text format. I had to change the script in hive-testbench/ddl-tpcds/text/alltable.sql to be STORED AS TEXTFILE.
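A sketch of the kind of edit, assuming each table in alltable.sql carries a STORED AS clause (the exact original text may differ, so inspect it first):

```
# See how the text-schema tables are currently declared:
grep -n -i "stored as" hive-testbench/ddl-tpcds/text/alltable.sql
# Force plain text explicitly (GNU sed; the pattern is illustrative, adjust to the file):
sed -i 's/STORED AS [^ ;]*/STORED AS TEXTFILE/gI' hive-testbench/ddl-tpcds/text/alltable.sql
```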
09-06-2016 04:57 AM
I also tried to query the tables in tpcds_text_10 before generating the tables in tpcds_bin_partitioned_orc_10, and they threw the same error. But that could make sense, because they are originally created in text format and then changed to ORC after that, as per my understanding of the scripts.
09-06-2016 12:06 AM
I tried with 10 GB; I have enough space, but I am still getting the same error.
09-05-2016 06:23 AM
I am working on setting up and configuring hive-testbench. I applied all the required configuration steps, but whenever I try to generate the data, I get the following exception:
Caused by: java.lang.RuntimeException: java.io.IOException: org.apache.hadoop.hive.ql.io.FileFormatException: Malformed ORC file hdfs://mycluster/tmp/tpcds-generate/100/date_dim/data-m-00099. Invalid postscript.
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:196)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:135)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:101)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:149)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:80)
at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:650)
at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:621)
at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:145)
at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:408)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:128)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:149)
... 14 more
Caused by: java.io.IOException: org.apache.hadoop.hive.ql.io.FileFormatException: Malformed ORC file hdfs://mycluster/tmp/tpcds-generate/100/date_dim/data-m-00099. Invalid postscript.
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:253)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:193)
... 25 more
Caused by: org.apache.hadoop.hive.ql.io.FileFormatException: Malformed ORC file hdfs://mycluster/tmp/tpcds-generate/100/date_dim/data-m-00099. Invalid postscript.
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.ensureOrcFooter(ReaderImpl.java:251)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:376)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:317)
at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:238)
at org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat.getRecordReader(VectorizedOrcInputFormat.java:175)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.createVectorizedReader(OrcInputFormat.java:1239)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1252)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:251)
... 26 more
Also, the tpcds_bin_partitioned_orc_100 DB is generated but remains empty due to these errors (i.e. no tables). I tried generating the data by only calling the script, and I tried running it with the FORMAT=textfile and FORMAT=orc options, but I still get the same error. Any idea how I can resolve this and generate the data in the tpcds_bin_partitioned_orc_100 DB?
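For reference, the invocations I tried look roughly like this (a sketch following the hive-testbench README conventions; the exact script name and variable spelling may differ per checkout):

```
cd hive-testbench
# Plain run at scale factor 100:
./tpcds-setup.sh 100
# Forcing the storage format through the FORMAT variable:
FORMAT=textfile ./tpcds-setup.sh 100
FORMAT=orc ./tpcds-setup.sh 100
```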
Labels:
- Apache Hive
07-25-2016 11:20 PM
I have a group that is synchronised from LDAP. The users in this group are allowed to log in to the Ranger Portal UI with a 'User' role. I want to assign the Admin role to all users of this group. Can I do that for the group without having to change the role of the users one by one?
Labels:
- Apache Ranger