Member since: 02-16-2016
Posts: 89
Kudos Received: 24
Solutions: 10
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 10057 | 05-14-2018 01:54 PM |
| | 1572 | 05-08-2018 05:07 PM |
| | 1121 | 05-08-2018 04:46 PM |
| | 2949 | 02-13-2018 08:53 PM |
| | 3538 | 11-09-2017 04:24 PM |
04-12-2023
08:56 PM
There are a few possible causes:

1. If the script runs fine manually as your user, the problem may be the binary path. Export the kinit binary path in the script:
   export PATH="/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/java/latest/bin:/usr/java/latest/jre/bin:/root/bin:/bin"
2. Check the permissions on the keytab file and the read/write/execute (rwx) permissions for the user running the script.
3. If the above does not help, add the ticket-generation command separately in cron to run every 10 minutes and test the script:
   */10 * * * * kinit -kt /root/user.keytab user@PROD.EDH
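If it helps, here is a minimal sketch of point 1 as a standalone wrapper script you can call from cron (the keytab path and principal are the ones from this thread; the trimmed PATH value and the log location are assumptions for a typical cron environment):

```bash
#!/bin/bash
# renew_ticket.sh -- cron-safe kinit wrapper (sketch).

# cron starts with a minimal PATH, so set it explicitly before calling kinit.
export PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"

KEYTAB="/root/user.keytab"      # keytab from this thread
PRINCIPAL="user@PROD.EDH"       # principal from this thread

if kinit -kt "$KEYTAB" "$PRINCIPAL"; then
    echo "$(date) ticket renewed for $PRINCIPAL" >> /var/log/renew_ticket.log
else
    echo "$(date) kinit FAILED for $PRINCIPAL" >> /var/log/renew_ticket.log
    exit 1
fi
```

Then schedule the wrapper in cron the same way as the one-liner above.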
02-07-2020
03:08 AM
Hello, by following the link shared by Umair, I was able to install it on my home laptop (Mac), because it installs different images (sandbox-hdp & sandbox-proxy). But on my office network only the sandbox-hdp-standalone image is available (the above images are not available; we have no access to globally available Docker images and need to ask an admin to import images into the internal repo). Is it possible to make this work with the sandbox-hdp-standalone Docker image? With that image I get the same error mentioned in the original post (the container exits without error). I also tried running the container with the debugging option, but there is no error there either.
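For reference, this is roughly how I start the standalone image (the image name/tag and the single published port here are assumptions; the official Hortonworks start-sandbox script publishes many more ports):

```bash
docker run --name sandbox-hdp-standalone \
  --hostname sandbox-hdp.hortonworks.com \
  --privileged -d \
  -p 8080:8080 \
  hortonworks/sandbox-hdp-standalone:latest
```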
01-15-2019
11:24 AM
First of all, double-check all configurations (including the password), just to make sure you are moving in the right direction. Secondly, confirm that you do not need TLS enabled. If these don't help, the following might help with troubleshooting: 1. Become the nifi user on the node where NiFi is running. 2. Send the message via Python. 3. Share the Python command here. Note: please explicitly specify in Python everything that you configure in NiFi, even settings that seem unnecessary because of good defaults (see the sketch below).
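For steps 1-2 combined, something like this (assuming NiFi runs as the "nifi" user; /tmp/send_test.py is a hypothetical name for your Python test script):

```bash
# Run the Python test send as the same user NiFi runs as, so file and
# credential permissions match what NiFi itself sees.
sudo -u nifi python /tmp/send_test.py
```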
05-28-2018
05:10 PM
Have a look at this if you need to bypass Avro for Hive DDL: https://github.com/tspannhw/nifi-convertjsontoddl-processor. If you need to convert JSON to ORC (for Hive), Avro will be required: you will need to write and manage Avro schemas (using Schema Registry for that is recommended). Alternatively, you can use the InferAvroSchema processor to detect the incoming schema from the JSON, but it may not be 100% correct all the time.
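For illustration, a schema you would register (or let InferAvroSchema produce) looks like this; the record and field names here are hypothetical:

```json
{
  "type": "record",
  "name": "Event",
  "namespace": "com.example",
  "fields": [
    { "name": "id",   "type": "long" },
    { "name": "name", "type": ["null", "string"], "default": null },
    { "name": "ts",   "type": ["null", "string"], "default": null }
  ]
}
```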
06-12-2018
11:35 AM
@Shailesh Bhaskar If the answer addressed your question, please take a moment to log in and click the Accept button below to accept it. That helps community users find solutions to these kinds of issues quickly and closes this thread.
05-10-2018
07:05 AM
I tried and verified this on my 10-node cluster. It worked perfectly.
02-14-2018
02:07 AM
I found that we can configure the ZooKeeper server address for Spark worker/slave nodes by setting spark.hadoop.hive.zookeeper.quorum under `Custom spark2-defaults`.
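That is, in Ambari under Spark2 > Configs > Custom spark2-defaults, the entry would look like this (the ZooKeeper hostnames are placeholders for your quorum):

```
spark.hadoop.hive.zookeeper.quorum=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```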
02-13-2018
09:11 PM
Usually, by default, the ticket expires after 24 hours and the renewable lifetime (cache) after 7 days; the exact values depend on your directory service's policies. Within those 7 days you can run kinit -R for users; klist shows the ticket and renewable-lifetime expiry times. Alternatively, you can use keytabs to automate ticket renewal. You never have to kinit for Hadoop services; their ticket renewal is managed automatically.
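For example, combining the commands mentioned above:

```bash
klist       # shows the ticket's "Expires" time and the "renew until" limit
kinit -R    # renews the existing TGT; works only within the renewable window
```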
11-13-2017
07:03 AM
@Umair Khan
Thanks for the template. Another question: suppose I have a threshold interval of 10 minutes but would like to monitor only twice (10 min + 10 min) rather than every 10 minutes. How can that be done?
11-07-2017
07:25 PM
1 Kudo
Try stopping NiFi, purging everything within your provenance repository, then starting NiFi. Check the nifi-app.log file for any provenance-related events, and check whether the user running the NiFi process has read/write access to the configured directory. I had a similar issue today, except my provenance implementation was set to Volatile, which I changed to WriteAhead. Also note that the default implementation is PersistentProvenanceRepository; if you have been changing implementations back and forth, you will need to delete the provenance data (WriteAhead can read PersistentProvenanceRepository data, but not the other way around).
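For reference, the relevant nifi.properties entries look like this (the directory value shown is the default; adjust it to your setup):

```
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.directory.default=./provenance_repository
```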