Member since: 12-28-2015
Posts: 74
Kudos Received: 17
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1440 | 05-17-2017 03:15 PM
 | 5794 | 03-21-2017 11:35 AM
 | 13290 | 03-04-2017 09:51 AM
 | 2111 | 02-09-2017 04:03 PM
 | 3515 | 01-19-2017 11:24 AM
05-24-2022
07:45 AM
Yes, we have the same problem. For now, only a HiveServer2 restart helps.
09-14-2017
07:49 AM
Thanks @Matt Clarke. I added that entry because I had previous issues with the LDAP admin user; now I understand better how it works. I just removed the "Legacy Authorized Users File" value and it works.
09-12-2017
01:43 PM
@Juan Manuel Nieto NiFi must be configured to run securely over https using SSL before any user authentication can be used. Thanks, Matt
06-05-2017
09:22 AM
Thanks @yvora, I had seen that before; I just didn't know why it isn't documented in https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_reference/content/hdfs-ports.html
05-20-2017
12:30 AM
You can upload your keytab file to the workflow lib folder so that the keytab is copied to the container folder regardless of which NodeManager the job runs on. Then you can specify --keytab your-keytab --principal your-principal in your spark-submit command. However, you have to upload the updated keytab to the workflow lib folder every time you change the password.
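A minimal sketch of the two steps above. All paths, the workflow location, the application class, and the principal/realm are placeholders, not values from the original thread:

```shell
# 1. Upload the keytab into the Oozie workflow's lib/ folder in HDFS, so it is
#    shipped to the container on whichever NodeManager ends up running the job.
#    (Re-run this whenever the keytab is regenerated after a password change.)
hdfs dfs -put -f myuser.keytab /user/myuser/workflows/my-wf/lib/

# 2. Point spark-submit at the keytab (resolved from the container's working
#    directory) and the matching principal.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --keytab myuser.keytab \
  --principal myuser@EXAMPLE.COM \
  --class com.example.MyApp \
  my-app.jar
```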
05-17-2017
03:15 PM
Hello @Vipin Rathor, apologies for the delayed answer. I finally solved it. As you said, the replay problem was that the client was trying to authenticate multiple times in a very short time; this was caused by curl and the -L parameter. For some reason curl wasn't storing the session cookie, so I fixed it by using the -c <file path> and -b <file path> parameters to store the cookie. Thank you.
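An illustrative version of the fix described above. The URL and cookie-jar path are placeholders; the key point is pairing -c (write cookies) with -b (send them back), so the authentication cookie is reused across the redirects that -L follows instead of triggering a fresh Kerberos exchange each hop:

```shell
# -c writes received cookies to the jar; -b replays them on subsequent
# (redirected) requests, avoiding repeated SPNEGO authentication.
curl --negotiate -u : -L \
  -c /tmp/session.cookies \
  -b /tmp/session.cookies \
  "http://namenode.example.com:50070/webhdfs/v1/tmp?op=LISTSTATUS"
```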
01-12-2018
02:16 PM
@Juan Manuel Nieto I am not sure whether this question has been resolved by this point; however, running the following command will kill the specific Oozie job: oozie job -oozie http://hostname:port/oozie/ -kill jobID. I hope this helps!
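A sketch of the command above with example values filled in. The host, port 11000 (the common Oozie default), and the job ID are illustrative placeholders:

```shell
# List recent workflow jobs to find the ID you want to kill.
oozie jobs -oozie http://oozie-host:11000/oozie -len 10

# Kill the specific job by its ID.
oozie job -oozie http://oozie-host:11000/oozie -kill 0000123-170112123456789-oozie-oozi-W
```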
03-22-2017
08:24 AM
Hi @Juan Manuel Nieto, well done! I noticed AMBARI-18898 and suspected it was causing havoc on the command line, but didn't have time to try it. However, now, after fixing Solr, Ranger audit cannot connect to it and Ambari is showing false "http 500" alerts on both Infra Solr instances. Edit: I missed "DEFAULT" in the name rules; I had omitted it because I previously tried with only one rule. After adding DEFAULT, everything is back to normal!
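For context, a Kerberos name-rules list of the kind described in the edit might look like the following (the RULE line is a made-up example; the essential part is that the list ends with a DEFAULT entry so principals not matched by any explicit rule still map to their short names):

```
RULE:[1:$1@$0](ambari-qa-mycluster@EXAMPLE.COM)s/.*/ambari-qa/
DEFAULT
```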
03-14-2017
04:36 PM
Hi @Deepesh, that was my problem. By default it takes /etc/hive/conf/; when I tried to use the conf.server directory, I also forgot the export and just declared a local variable, which is why it failed when I tried it.
Thank you.
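The export pitfall mentioned above can be shown in isolation. This is a generic shell sketch (HIVE_CONF_DIR and the path are just the values from this thread): a plain assignment stays local to the current shell, while export makes the variable part of the environment inherited by child processes such as hive or beeline.

```shell
# Local assignment: NOT visible to child processes.
HIVE_CONF_DIR=/etc/hive/conf.server
sh -c 'echo "child sees: ${HIVE_CONF_DIR:-nothing}"'

# Exported: child processes inherit it.
export HIVE_CONF_DIR=/etc/hive/conf.server
sh -c 'echo "child sees: ${HIVE_CONF_DIR:-nothing}"'
```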
03-14-2017
04:21 PM
Hello @Saurabh, if you look at the error message closely, it says 'No service creds'. Since you are running a hadoop command, this most probably means that the NameNode service keytab is either missing or bad. In both cases, please check the NameNode log for any errors during service startup. To verify the service keytab, try running these on the NameNode: su - hdfs
kinit -kt /etc/security/keytabs/nn.service.keytab nn/<nn-host-fqdn>@REALM
The last command should give you a valid TGT for the NN service principal, which would show that the NN service keytab is good. Lastly, you can try regenerating the keytabs for all the services. Hope this helps!
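The verification above can be extended with klist, which I'm adding as a suggestion (the keytab path is the standard HDP location from the post; the FQDN and REALM remain placeholders):

```shell
su - hdfs
# Show which principals and key versions the keytab actually contains.
klist -kt /etc/security/keytabs/nn.service.keytab
# Obtain a TGT using the keytab; failure here points to a bad keytab or KDC issue.
kinit -kt /etc/security/keytabs/nn.service.keytab nn/<nn-host-fqdn>@REALM
# Confirm the credential cache now holds a TGT for the NN service principal.
klist
```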