Member since
06-19-2017
62
Posts
1
Kudos Received
7
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4873 | 03-17-2022 10:37 AM |
| | 3077 | 12-10-2021 04:25 AM |
| | 3348 | 08-18-2021 02:20 PM |
| | 8542 | 07-16-2021 08:41 AM |
| | 1853 | 07-13-2021 07:03 AM |
12-29-2025
05:18 AM
@adhishankarit @VR46 @zaun I think this is also relevant to the previous issue. Even after obtaining a valid Kerberos ticket for the user, we still face the same behavior; Knox SSO is enforced here. When using:

```
curl -k -u "k164prda:ep8gv=rG" https://sitlxdvdlap099.saibsit.com:8443/gateway/knoxsso/api/v1/websso
```

gives no result.

```
curl -k -u k164prda https://sitlxdvdlap099.saibsit.com:8443/gateway/knoxsso/api/v1/token
```

gives no result.

```
curl -k -v --negotiate -u : https://sitlxdvdlap099.saibsit.com:8443/gateway/cdp-proxy/oozie/v2/admin/status
```

returns `HTTP/1.1 302 Found`.

1. What exact Cloudera Manager configurations are required to enable Kerberos (SPNEGO) authentication for Knox API access (non-browser or browser), so that `curl --negotiate` is honored instead of redirecting to KnoxSSO?
2. What are the exact prerequisites for enabling `/gateway/knoxsso/api/v1/token` in CDP? We are not able to generate a token; please share specific commands if you want me to try them.

Our use case is triggering an Oozie workflow via the REST API from within the cluster and from the Informatica DEI server. Which method is supported when Knox SSO is enabled?
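For reference, a minimal Python sketch of the basic-auth call to the Knox token endpoint described above (the gateway URL matches the post; the password here is a placeholder, and the token service must be enabled in the Knox topology for the call to succeed):

```python
import base64
import urllib.request

def build_knox_token_request(gateway_url, user, password):
    """Build (but do not send) a GET request for the Knox token API
    using HTTP basic auth. Credentials below are placeholders."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{gateway_url}/knoxsso/api/v1/token",
        headers={"Authorization": f"Basic {token}"},
    )

req = build_knox_token_request(
    "https://sitlxdvdlap099.saibsit.com:8443/gateway", "k164prda", "secret")
# urllib.request.urlopen(req) would return the token JSON when the
# KnoxToken service is enabled in the topology; a 302 or empty body
# usually means SSO redirection or a missing token-service config.
```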
07-12-2022
03:08 PM
@Chakkara As far as I remember, the distributed cache does not provide consistency. You could use HBase or HDFS to store the success/failure status of the processors for the downstream application. Once you have saved the success and failure status in HBase, you can retrieve it with the FetchHBaseRow processor using the row ID. Build a REST API NiFi flow to pull the status from HBase, for example: HandleHttpRequest --> FetchHBaseRow --> HandleHttpResponse. You can call the HTTP API (request and response) via a shell script/curl and invoke that script from Control-M.
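A small Python sketch of what the Control-M side could call, assuming a HandleHttpRequest processor listens on a hypothetical host, port, path, and query-parameter name (all of these depend on your actual processor configuration):

```python
import urllib.parse

# Hypothetical endpoint exposed by the HandleHttpRequest processor;
# adjust host, port, and path to match your NiFi listener config.
NIFI_STATUS_ENDPOINT = "http://nifi-host.example.com:9090/flow-status"

def build_status_url(row_id):
    """Build the URL a Control-M-invoked script (curl or Python) would
    call to ask the HandleHttpRequest -> FetchHBaseRow ->
    HandleHttpResponse flow for the status stored under this row ID."""
    return NIFI_STATUS_ENDPOINT + "?" + urllib.parse.urlencode({"rowId": row_id})

url = build_status_url("wf_20220712_001")  # hypothetical HBase row key
# urllib.request.urlopen(url) would then return whatever the flow writes
# back, e.g. a JSON body containing the SUCCESS/FAILURE status.
```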
04-05-2022
12:44 AM
Even though this is an old thread, updating it:

```
java-home/bin/keytool -genkey -alias server-alias -keyalg RSA -keypass changeit \
  -storepass changeit -keystore keystore.jks
java-home/bin/keytool -export -alias server-alias -storepass changeit \
  -file server.cer -keystore keystore.jks
```

This needs to be configured on the NiFi instance.
12-10-2021
04:25 AM
Hi, please find a sample flow using the ListSFTP and FetchSFTP processors to put files into a target HDFS path.

1. ListSFTP keeps listening to the input folder, for example /opt/landing/project/data on the file-share server. When a new file arrives, ListSFTP captures only the file name and passes it to the FetchSFTP processor.
2. Once the latest file has been identified by ListSFTP, FetchSFTP fetches the file from the source path.
3. In PutHDFS, configure the values for your project and the required target folder. If your cluster is Kerberos enabled, add the Kerberos controller service so that NiFi can access HDFS.
4. The success and failure relationships of the PutHDFS processor can be used to monitor the flow status, and the status can be stored in HBase for querying by downstream applications.
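As a rough sketch of the key properties for the three processors (property names come from the standard NiFi processors; the hostnames, paths, and service names below are placeholders to adapt to your environment):

```
# ListSFTP
Hostname     : sftp-host.example.com
Port         : 22
Username     : svc_ingest
Remote Path  : /opt/landing/project/data

# FetchSFTP
Hostname     : sftp-host.example.com
Remote File  : ${path}/${filename}

# PutHDFS
Hadoop Configuration Resources : /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
Kerberos Credentials Service   : (controller service, if the cluster is Kerberized)
Directory                      : /project/target/path
```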
12-02-2021
10:05 AM
@Griggsy If you can share a sample source JSON, it will help the community give more specific guidance. Thanks, Matt
10-23-2021
12:23 AM
Hi, could you please check that the user has permission to trigger Oozie in the Ranger policy, and also verify that your Oozie workflow XML file is present in the HDFS path. Normal basic auth is fine for accessing the Oozie REST APIs; I am able to perform POST and GET requests for an Oozie workflow successfully and monitor the status of the workflow in the same script.
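A minimal Python sketch of the basic-auth Oozie REST submission described above, building the job-configuration XML and the request without sending it (the host, credentials, and workflow path are placeholders; the `/v2/jobs?action=start` endpoint and the `oozie.wf.application.path` property are from the Oozie Web Services API):

```python
import base64
import urllib.request

def build_oozie_submit_request(oozie_url, user, password, wf_hdfs_path):
    """Build (but do not send) a POST that submits and starts an Oozie
    workflow via the REST API using HTTP basic auth."""
    # Minimal job configuration: workflow application path plus user.name.
    config_xml = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<configuration>"
        "<property><name>oozie.wf.application.path</name>"
        f"<value>{wf_hdfs_path}</value></property>"
        "<property><name>user.name</name>"
        f"<value>{user}</value></property>"
        "</configuration>"
    )
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{oozie_url}/v2/jobs?action=start",  # action=start submits and runs
        data=config_xml.encode(),
        headers={
            "Content-Type": "application/xml;charset=UTF-8",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

req = build_oozie_submit_request(
    "http://oozie-host.example.com:11000/oozie",  # hypothetical host
    "myuser", "secret", "hdfs://nameservice1/user/myuser/wf")
# urllib.request.urlopen(req) would return JSON containing the job id,
# which you can then poll via GET /v2/job/<id>?show=status.
```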
10-13-2021
05:47 AM
Hi @arunek95 Yes, the workaround has been applied by following the community posts. As of now, we don't have a root cause for why so many files were in OPENFORWRITE state on those particular two days in our cluster. https://community.cloudera.com/t5/Support-Questions/Cannot-obtain-block-length-for-LocatedBlock/td-p/117517 Thanks
08-18-2021
04:55 PM
Thanks @adhishankarit for your help. The value of H gross amount is a list, ["55.00","58.00"], and it comes from a flowfile attribute. Will it be possible to get the attribute value of H gross amount and use it in the JoltTransformJSON NiFi processor, taking the last element of H gross amount? Thanks in advance.
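One possible sketch, with hypothetical field names: if the list is present in the flowfile JSON content (Jolt operates on content, so a list that lives only in an attribute would first need to be written into the content, e.g. via AttributesToJSON or ReplaceText), Jolt's modify-overwrite-beta operation provides a `lastElement` function to pick the final entry:

```json
[
  {
    "operation": "modify-overwrite-beta",
    "spec": {
      "last_gross": "=lastElement(@(1,gross_amounts))"
    }
  }
]
```

Here `gross_amounts` stands in for wherever the ["55.00","58.00"] list ends up in the JSON, and `last_gross` is a hypothetical output field.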
07-30-2021
01:09 PM
Hi, once you have extracted the header into a flowfile attribute using the ExtractText processor, you can either convert that header attribute into flowfile content or keep the header value as an attribute. The Stack Overflow post explains extracting the header into a flowfile attribute and then passing the headers as a file to the destination. To convert a flowfile attribute into flowfile content, use the ReplaceText processor, where you can reference flowfile attributes. The success relationship of ReplaceText will then contain only the header as the flowfile content; the original CSV content is replaced by the header. That flowfile content can be transferred to the destination or to the next processor in the flow. Hope this is the information you are looking for. Thanks
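As a sketch, the ReplaceText properties for this step could look like the following, where `csv.header` is a hypothetical attribute name that depends on how your ExtractText processor is configured:

```
Replacement Strategy : Always Replace
Evaluation Mode      : Entire text
Replacement Value    : ${csv.header}
```

With "Always Replace", the entire incoming content is discarded and replaced by the evaluated Replacement Value, so the outgoing flowfile contains only the header text.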
07-28-2021
11:09 AM
1 Kudo
@jg6 There is no direct relationship between the DistributedMapCacheServer and the DistributedMapCacheClientService. The client is simply configured with a hostname and a port; that hostname and port could belong to a DistributedMapCacheServer running on an entirely different NiFi cluster somewhere. Additionally, no component registers a dependency on the DistributedMapCacheServer controller service; components only depend on the DistributedMapCacheClientService. So when constructing a template, only the interconnected and dependent pieces are included. That being said, the DistributedMapCache is not the cache I would recommend using anyway. It offers no high availability (HA). While a DistributedMapCacheServer is started on every node in a NiFi cluster, the servers do not talk to one another, and the DistributedMapCacheClientService can only be configured to point at one of them. So if you lose the NiFi node your clients point at, you lose your entire cache. There are better options for external cache services that do offer HA. Hope this is helpful, Matt