Member since: 06-19-2017
Posts: 62
Kudos Received: 1
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1616 | 03-17-2022 10:37 AM
 | 1159 | 12-10-2021 04:25 AM
 | 1765 | 08-18-2021 02:20 PM
 | 4392 | 07-16-2021 08:41 AM
 | 690 | 07-13-2021 07:03 AM
10-24-2023
09:40 PM
Hi @adhishankarit Spark applications run on the Spark engine, not on a Tez engine, unlike Hive. You do not need to set any engine on the Spark side. If you want to run Hive queries, then you can set an execution engine such as Tez, Spark, or MR, as in the sketch below.
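A minimal sketch of switching the Hive execution engine per session, assuming HiveServer2 at a placeholder host (valid values are tez, spark, and mr):

```bash
# Hive chooses its engine per session; a Spark application itself always runs on Spark.
beeline -u "jdbc:hive2://<hs2-host>:10000" -e "SET hive.execution.engine=tez;"
```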
07-12-2022
03:08 PM
@Chakkara As far as I remember, the distributed cache does not guarantee consistency. You could use HBase or HDFS to store the success or failure status of the processors for the downstream application. Once you have saved the success and failure statuses in HBase, you can retrieve them with the FetchHBaseRow processor using the row ID. Build a REST API NiFi flow to pull the status from HBase, for example HandleHTTPRequest --> FetchHBaseRow --> HandleHTTPResponse. You can call the HTTP API (request and response) via a shell script/curl and invoke the script from Control-M; see the sketch below.
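A minimal sketch of the Control-M side, assuming a hypothetical HandleHTTPRequest listener on port 9091 at a /flow-status path that returns the HBase row for the given row ID (host, port, path, and parameter name are all placeholders):

```bash
#!/bin/bash
# Hypothetical endpoint and query parameter; match these to your HandleHTTPRequest config.
ROW_ID="$1"
STATUS=$(curl -s "http://<nifi-host>:9091/flow-status?rowId=${ROW_ID}")

# Fail the Control-M job if the flow did not report success for this row.
echo "$STATUS" | grep -q "SUCCESS" || exit 1
```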
04-05-2022
12:44 AM
Even though this is an old thread, updating it. Add:

    # Generate a key pair in a new JKS keystore.
    java-home/bin/keytool -genkey -alias server-alias -keyalg RSA -keypass changeit \
        -storepass changeit -keystore keystore.jks
    # Export the public certificate so clients can trust the server.
    java-home/bin/keytool -export -alias server-alias -storepass changeit \
        -file server.cer -keystore keystore.jks

This needs to be configured on the NiFi instance.
03-19-2022
10:16 PM
Thank you @adhishankarit for sharing this with us. It will be useful for others as well; many readers will benefit from it.
12-10-2021
04:25 AM
Hi, please find a sample flow that uses the ListSFTP and FetchSFTP processors and puts the files into the target HDFS path.

1. ListSFTP keeps listening to the input folder (for example /opt/landing/project/data) on the file-share server. When a new file arrives, ListSFTP picks up only the name of the file and passes it to the FetchSFTP processor, which fetches the file from the source folder. The key ListSFTP properties are listed in the sketch after this list.
2. Once the latest file has been identified by ListSFTP, FetchSFTP fetches the file from the source path; its key properties are in the same sketch.
3. In PutHDFS, configure the values for your project and the required target folder. If your cluster is Kerberos-enabled, add the Kerberos controller service so NiFi can access HDFS.
4. The success and failure relationships of the PutHDFS processor can be used to monitor the flow status, and the status can be stored in HBase for querying.
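The original screenshots of the processor settings are not available here, so this is a sketch of the key properties from memory; hostnames, paths, and the controller-service name are placeholders, and a password-based SFTP login with a Kerberized HDFS is assumed:

```bash
# ListSFTP:   Hostname=<fileshare-host>  Port=22  Username=<svc-user>
#             Remote Path=/opt/landing/project/data
# FetchSFTP:  Hostname=${sftp.remote.host}  Remote File=${path}/${filename}
#             (the default expressions pick up the attributes that ListSFTP sets)
# PutHDFS:    Hadoop Configuration Resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
#             Directory=/data/project/incoming
#             Kerberos Credentials Service=<your Kerberos controller service>
```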
12-02-2021
10:05 AM
@Griggsy If you can share a sample source JSON, it may help the community give more specific guidance. Thanks, Matt
10-23-2021
12:23 AM
Hi, could you please check that the user has permission to trigger Oozie in the Ranger policy, and also check that your Oozie workflow XML file is present in the HDFS path. Normal Basic Auth is fine for accessing the Oozie REST APIs. I am able to perform POST and GET requests for an Oozie workflow successfully and monitor the status of the workflow in the same script; a sketch follows below.
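A minimal sketch of such a script, assuming Oozie at a placeholder host and a job-config.xml that mirrors the job.properties entries (user, password, and paths are placeholders):

```bash
#!/bin/bash
OOZIE="http://<oozie-host>:11000/oozie"

# Submit and start the workflow in one POST; the JSON response contains the job id.
JOB_ID=$(curl -s -u myuser:mypass -X POST \
  -H "Content-Type: application/xml" \
  -d @job-config.xml \
  "$OOZIE/v1/jobs?action=start" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')

# Poll the workflow status with a GET in the same script.
curl -s -u myuser:mypass "$OOZIE/v1/job/$JOB_ID?show=info"
```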
10-14-2021
02:02 AM
Hi @ShankerSharma, thank you for the confirmation. Yes, I mentioned one of the working DROP PARTITION queries in the post. We were in a situation where we needed to use functions inside the DROP PARTITION clause. We will instead do the 14-day calculation in a script and pass the value to the DROP PARTITION statement, as sketched below.
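A minimal sketch of that approach, assuming a table partitioned by a string column dt in yyyy-MM-dd format (database, table, and host are placeholders):

```bash
#!/bin/bash
# Compute the cutoff in the shell, since DROP PARTITION does not accept function calls.
CUTOFF=$(date -d "14 days ago" +%Y-%m-%d)
beeline -u "jdbc:hive2://<hs2-host>:10000" \
  -e "ALTER TABLE mydb.mytable DROP IF EXISTS PARTITION (dt < '${CUTOFF}');"
```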
10-13-2021
05:47 AM
Hi @arunek95 Yes, the workaround has been applied by following the community posts. As of now we don't have a root cause for why so many files were in OPENFORWRITE state on those particular two days in our cluster. https://community.cloudera.com/t5/Support-Questions/Cannot-obtain-block-length-for-LocatedBlock/td-p/117517 Thanks
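For anyone landing here, a sketch of the usual workaround from that thread (the path and retry count are examples):

```bash
# List files stuck in open-for-write state, then recover their leases.
hdfs fsck /data -openforwrite | grep OPENFORWRITE
hdfs debug recoverLease -path /data/<stuck-file> -retries 3
```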
09-18-2021
10:00 PM
Hi @Daggers You can store the Avro schema file in an HDFS folder and point your Hive table at that location. The avro.schema.url property and its value can be passed while creating the Hive table. This solution supports versioning the Avro schema file in HDFS for the respective table; see the sketch below. You can explore Schema Registry as well.
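A minimal sketch of the approach, assuming an external Avro table with placeholder database, table, schema, and data paths:

```bash
# Upload the versioned schema file, then point the table at it via avro.schema.url.
hdfs dfs -put mytable_v1.avsc /schemas/mytable/
beeline -u "jdbc:hive2://<hs2-host>:10000" -e "
CREATE EXTERNAL TABLE mydb.mytable
STORED AS AVRO
LOCATION '/data/mytable'
TBLPROPERTIES ('avro.schema.url'='hdfs:///schemas/mytable/mytable_v1.avsc');"
```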