Member since: 07-21-2016
Posts: 101
Kudos Received: 10
Solutions: 4

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3936 | 02-15-2020 05:19 PM |
| | 71005 | 10-02-2017 08:22 PM |
| | 1532 | 09-28-2017 01:55 PM |
| | 1765 | 07-25-2016 04:09 PM |
10-02-2017
08:22 PM
@Robert Levas I found it: a job from another scheduler was issuing kdestroy, which was removing the ticket cache. Thanks for your answers.
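One way to avoid this kind of collision, sketched below under the assumption that the job runs as user1 with a headless keytab and that user1@EXAMPLE.COM is the principal (neither is spelled out in the thread): give the job its own credential cache via KRB5CCNAME, so a kdestroy against the default /tmp/krb5cc_<uid> cache from another scheduler cannot touch it.

```bash
# Hypothetical cache path and principal; substitute the real job user and realm.
export KRB5CCNAME=FILE:/tmp/krb5cc_user1_myjob

# Acquire a ticket into the private cache, then run the HDFS work against it.
kinit -k -t ~user1/user1.headless.keytab user1@EXAMPLE.COM
hdfs dfs -ls /tmp   # uses the private cache, not the shared /tmp/krb5cc_603
```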
09-28-2017
01:55 PM
Found out that this information is available in the ZKFC logs, which live on the NameNode hosts. For some reason these logs are not present on my NameNode boxes, so I probably need to restart the ZKFC services. Thanks, Kumar
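For reference, a minimal way to pull failover activity out of the ZKFC logs once they are being written; the log path below is the usual HDP default under /var/log/hadoop/hdfs and is an assumption, not something stated in this thread.

```bash
# State transitions, elections, and fencing decisions are logged by the ZKFailoverController.
grep -iE "active|standby|fenc|elect" /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-*.log
```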
09-26-2017
04:07 PM
I need to know whether any failover happened on the NameNode services. Where can I get this information? Is there a REST API?
Labels:
- Apache Hadoop
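Two common ways to check which NameNode is currently active, sketched under the assumption of an HA nameservice with NameNode IDs nn1/nn2 and the default HTTP port 50070; neither gives a history of past failovers, which is why the ZKFC logs mentioned above are the better source for that.

```bash
# CLI: ask the HA admin tool for each NameNode's current state (active/standby).
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# HTTP: the NameNode JMX servlet exposes the same state as JSON.
curl -s 'http://nn1-host.example.com:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
```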
09-25-2017
03:17 PM
Is there a way to find out whether some other process is destroying the ticket, maybe from some logs?
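Kerberos itself does not log who removed a cache file, but on Linux an audit watch on the cache path will record which process writes to or removes it. A rough sketch, assuming auditd is installed and using the cache path from the error in the later post; the audit key name is arbitrary.

```bash
# Watch the ticket cache for writes, attribute changes, and removal (requires root).
auditctl -w /tmp/krb5cc_603 -p wa -k krb5cc_watch

# After the cache disappears, see which executable and PID touched it.
ausearch -k krb5cc_watch -i
```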
09-21-2017
06:22 PM
@Robert Levas Forgot to mention that this is sporadic. The script does a bunch of HDFS operations: sudo -u user1 bash -c "kinit -R || kinit -k -t ~user1/user1.headless.keytab"
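A minimal, more defensive variant of that guard, assuming a headless principal of user1@EXAMPLE.COM (the real principal and realm are not shown in the post): check whether the cache still holds a valid TGT before each batch of HDFS commands, and re-kinit from the keytab when it does not, rather than relying on kinit -R against a cache that may already be gone.

```bash
#!/usr/bin/env bash
# Hypothetical principal; substitute the real one for user1.
PRINCIPAL="user1@EXAMPLE.COM"
KEYTAB=~user1/user1.headless.keytab

# klist -s exits non-zero when the cache is missing or the TGT has expired,
# so acquire a fresh ticket from the keytab in that case.
if ! klist -s; then
    kinit -k -t "$KEYTAB" "$PRINCIPAL"
fi

hdfs dfs -ls /tmp
```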
09-21-2017
04:46 PM
I have "klist" written in front of all hdfs commands in my script. When the job starts, it says the credentials are present and valid for next few days. But immediately once the next hdfs command starts it says as follows: "klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_603)" [2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:413)
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:563)
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:378)
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:732)
[2017-09-20 08:24:57,336] {bash_operator.py:74} INFO - at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:728)
[2017-09-20 08:24:57,336] {bash_operator.py:74} INFO - at java.security.AccessController.doPrivileged(Native Method)
[2017-09-20 08:24:57,336] {bash_operator.py:74} INFO - at javax.security.auth.Subject.doAs(Subject.java:415)

I am really not sure whether some other process is corrupting the cache. Has anyone faced the same issue?
Labels:
- Apache Hadoop
05-18-2017
09:27 PM
@Vani I take it back. Actually, restarting the Ambari agent resolved the issue.
05-18-2017
09:22 PM
This did not work
05-15-2017
09:52 PM
I am pretty sure disk1 is not mounted to the root partition. I unmounted and mounted it back, but this alert is not going away. Has someone faced this issue before? Thanks, Kumar
Labels:
- Apache Ambari
- Apache Hadoop
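For anyone hitting the same Ambari host disk-usage alert, a quick way to confirm whether a data directory really sits on its own mount rather than on the root filesystem, followed by the agent restart that ultimately cleared the alert in this thread. The /grid/disk1 path is only an example, not the poster's actual mount point.

```bash
# Show which filesystem and mount point the directory actually lives on.
df -h /grid/disk1
findmnt -T /grid/disk1

# Re-report the host's mount information to Ambari by restarting the agent.
ambari-agent restart
```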