Member since: 07-07-2016
Posts: 53
Kudos Received: 6
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2194 | 06-27-2016 10:00 PM |
01-04-2018 06:37 PM
@Karl Fredrickson Hi Karl, same issue after a stop and restart. I tried both 1 hour and 4 hours for the Kerberos relogin period, which is the same relogin period I use for FetchHDFS/ListHDFS. This is happening only for GetHDFS. I am assuming the GetHDFS processor is trying to delete/move or write, which might need some additional permissions. The HDFS files are owned by hive:hive with 771 permissions, and with those same 771 permissions and hive:hive ownership FetchHDFS and ListHDFS are working. Thanks
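A minimal sketch (not from the original thread) of how that permission theory could be tested outside NiFi with the Hadoop Java API: log in with the same principal/keytab the processors use and probe the source directory for the access GetHDFS would need if it deletes or moves files. The principal, keytab path, and directory below are placeholders.

```java
// A sketch only: the principal, keytab, and directory are placeholders, not
// values from this thread. It verifies which HDFS permissions the flow's
// Kerberos identity actually has on the source directory -- GetHDFS needs
// write on the directory if it deletes or moves source files, while
// FetchHDFS/ListHDFS only need read + execute.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

public class HdfsAccessCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();            // picks up core-site.xml / hdfs-site.xml
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "nifi@EXAMPLE.COM", "/etc/security/keytabs/nifi.keytab");  // placeholders

        try (FileSystem fs = FileSystem.get(conf)) {
            Path dir = new Path("/data/source-dir");          // placeholder source directory
            check(fs, dir, FsAction.READ_EXECUTE);            // enough for FetchHDFS / ListHDFS
            check(fs, dir, FsAction.WRITE);                   // needed if GetHDFS removes/moves files
        }
    }

    private static void check(FileSystem fs, Path dir, FsAction action) throws Exception {
        try {
            fs.access(dir, action);                           // throws if the permission is missing
            System.out.println(action + " on " + dir + ": OK");
        } catch (AccessControlException e) {
            System.out.println(action + " on " + dir + ": DENIED (" + e.getMessage() + ")");
        }
    }
}
```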
01-04-2018 05:38 PM
Hi, I am using the FetchHDFS NiFi processor, and it runs fine when fetching an exact HDFS file. I want to get all HDFS files under a directory, so I am using GetHDFS with the source file option kept as "True". But I am getting a Kerberos error:

ERROR [Timer-Driven Process Thread-1] o.apache.nifi.processors.hadoop.GetHDFS GetHDFS[id=XXXXXXXXXX] Error retrieving file hdfs://XXXXXXXXXXXXXXXXXXXX.0. from HDFS due to java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt): {}
java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:332)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:311)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)

I am wondering why the same Kerberos credentials work for FetchHDFS/ListHDFS but not for GetHDFS. Does GetHDFS need additional setup? Please suggest. Thanks, Srikaran
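One way to narrow this down is a standalone check, outside NiFi, that the same principal/keytab can obtain a TGT and list the directory at all, which is roughly the read path that ListHDFS/FetchHDFS exercise. This is a sketch only; the principal, keytab path, and directory are placeholders, not values from the post.

```java
// Standalone sanity check (placeholders throughout): can the same
// principal/keytab obtain a Kerberos TGT and list the source directory?
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();            // expects core-site.xml / hdfs-site.xml on the classpath
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Same principal/keytab configured on the GetHDFS processor (placeholders).
        UserGroupInformation.loginUserFromKeytab(
                "nifi@EXAMPLE.COM", "/etc/security/keytabs/nifi.keytab");
        System.out.println("Logged in as " + UserGroupInformation.getLoginUser());

        try (FileSystem fs = FileSystem.get(conf)) {
            for (FileStatus status : fs.listStatus(new Path("/data/source-dir"))) {  // placeholder dir
                System.out.println(status.getPath() + "  " + status.getPermission());
            }
        }
    }
}
```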
Labels:
- Apache NiFi
12-06-2017 06:26 PM
1 Kudo
@Timothy Spann Thanks a lot, these are very helpful. Let me test the flow and I will update accordingly. Thanks
12-06-2017 06:24 PM
1 Kudo
@anarasimham Looks like GetHDFS will replace the HDFS file. I am planning to use FetchHDFS and then the InvokeHTTP processor. For now I am converting the Avro file to JSON on the Hadoop end, then fetching the JSON and posting it. I will test Avro and other formats directly and will update. Thanks!
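For the Avro-to-JSON step, here is a rough sketch of one way to do the conversion on the Hadoop side with the plain Avro Java API; the original post does not say how it is actually done, and the file names below are placeholders. The avro-tools jar's `tojson` command does the same thing from the command line.

```java
// Sketch: read an Avro data file with the generic API and re-encode each
// record as JSON. Input/output file names are placeholders.
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.io.JsonEncoder;

public class AvroToJson {
    public static void main(String[] args) throws Exception {
        File avroFile = new File("input.avro");               // placeholder input
        try (DataFileReader<GenericRecord> reader =
                 new DataFileReader<>(avroFile, new GenericDatumReader<GenericRecord>());
             OutputStream out = new FileOutputStream("output.json")) {  // placeholder output

            Schema schema = reader.getSchema();
            GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
            JsonEncoder encoder = EncoderFactory.get().jsonEncoder(schema, out);

            GenericRecord record = null;
            while (reader.hasNext()) {
                record = reader.next(record);                 // reuse the record object
                writer.write(record, encoder);                // one JSON object per Avro record
            }
            encoder.flush();
        }
    }
}
```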
12-04-2017 07:17 PM
1 Kudo
Hello. I have an HDFS file whose data needs to be posted to an external URL (HTTPS). I have the username and password for the URL, and I can post a sample JSON via Postman from my browser using those credentials. Now I have to use NiFi for this flow. Please let me know exactly which NiFi processors I should use to get the data from HDFS and post it to the URL. Also, kindly let me know what format the HDFS data should be in for this kind of use case. Thanks, Srikaran
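Before wiring up the NiFi flow (a later reply in this thread suggests FetchHDFS feeding InvokeHTTP), a quick way to confirm that the endpoint and credentials behave the same outside Postman is a minimal Java 11 HttpClient sketch like the one below; the URL, credentials, and payload are placeholders. In NiFi itself, InvokeHTTP has basic-authentication username/password properties that serve the same purpose.

```java
// Sketch: POST a sample JSON body with HTTP Basic auth, mirroring the Postman
// test. URL, credentials, and payload are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PostJsonCheck {
    public static void main(String[] args) throws Exception {
        String url = "https://example.com/api/ingest";        // placeholder endpoint
        String user = "username";                             // placeholder credentials
        String password = "password";
        String json = "{\"sample\":\"payload\"}";             // placeholder body

        String basicAuth = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + basicAuth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```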
Labels:
- Apache NiFi
09-14-2016 03:06 PM
@Predrag Minovic Great options. It looks like, from all the options above, the second ZK quorum should be installed manually outside Ambari and Kafka configured accordingly? If that's the case, when I do upgrades on this cluster in the future I will have to handle upgrading the second, manually installed ZK quorum as a separate effort, right? And I like the two-cluster solution, but what if some business logic on cluster 1 depends on Kafka on cluster 2? In that case I guess the two-cluster solution will not work, right? Please confirm! Thanks, Sri.
09-13-2016 07:10 PM
Hi, I am planning to build an HDP 2.4.2 Kerberized cluster via Ambari Blueprints, and I am going to change the blueprint to have 6 ZooKeepers. The reason I want 6 ZKs is to have two ZK quorums of 3 ZKs each: one quorum for HDFS NameNode HA, HBase, and the other services, and a second quorum dedicated to Kafka alone. I am assuming that if I build the cluster with 6 ZKs initially, it will create only one ZK quorum with all 6 ZKs in it. Can I change it to 2 ZK quorums after cluster installation from zkCli, or is there an option in the Ambari blueprint itself to create 2 ZK quorums with 3 ZK servers in each? Please advise! Thanks
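However the second ensemble ends up being created, clients ultimately pick a quorum purely by connect string. As a sketch (hostnames below are placeholders, not from the post), a client aimed at a Kafka-only ensemble would look like this, and Kafka's zookeeper.connect setting would carry the same string.

```java
// Sketch: connect to a dedicated 3-node ZooKeeper ensemble and list the
// root znode. Hostnames are placeholders.
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class KafkaQuorumCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder hosts for the ensemble dedicated to Kafka;
        // Kafka's zookeeper.connect would point at the same string.
        String connect = "zk4.example.com:2181,zk5.example.com:2181,zk6.example.com:2181";

        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper(connect, 30000, (WatchedEvent event) -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        System.out.println("children of /: " + zk.getChildren("/", false));
        zk.close();
    }
}
```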
Labels:
- Apache Kafka
06-27-2016 10:00 PM
@milind pandit
This is what I am giving for worker.childopts! Please correct me if you see something weird.