Member since: 08-08-2024
Posts: 43
Kudos Received: 2
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 218 | 09-08-2025 01:12 PM
 | 336 | 08-22-2025 03:01 PM
10-16-2025
09:42 AM
Understood. Hopefully the missing NARs pointed out in the previous update will help you figure out the issue.
10-15-2025
12:13 PM
Hello @pnac03,

Have you checked whether the truststore has the CA cert from the NiFi Registry imported?

keytool -list -keystore /path/to/truststore.jks

If it is not listed there, you will need to import it:

keytool -importcert -alias 3SCDemo-CA -file /tmp/ca-cert.pem -keystore /path/to/truststore.jks
10-15-2025
12:04 PM
Hello @mbraunerde,

I see you mentioned that you're using NiFi 2.5.0; I believe that version is not provided by Cloudera in CFM, right? Even the most recent CFM does not include NiFi 2.5.0: the latest is CFM 4.10 with NiFi 2.3.0. I ask because the Cloudera-provided CFM already includes Parquet support (PutParquet): https://docs.cloudera.com/cfm/4.10.0/release-notes/topics/cfm-supported-processors.html

Now, if you want to add it to a custom NiFi install, besides the Parquet NAR itself you also need nifi-standard-services-api-nar and nifi-record-serialization-services-nar. You can take a look here:

https://mvnrepository.com/artifact/org.apache.nifi/nifi-standard-services-api-nar
https://mvnrepository.com/artifact/org.apache.nifi/nifi-record-serialization-services-nar
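For reference, here is a minimal sketch of fetching the three NARs from Maven Central into a custom install; the lib path and the 2.5.0 version are assumptions you should match to your own environment:

```python
# Hypothetical helper: download the Parquet NAR and its two dependencies
# from Maven Central into a custom NiFi install. Path and version are assumptions.
import urllib.request

NIFI_LIB = "/opt/nifi/lib"   # assumption: adjust to your install's lib (or autoload) dir
VERSION = "2.5.0"            # assumption: must match your NiFi version
BASE = "https://repo1.maven.org/maven2/org/apache/nifi"

for nar in (
    "nifi-parquet-nar",
    "nifi-standard-services-api-nar",
    "nifi-record-serialization-services-nar",
):
    url = f"{BASE}/{nar}/{VERSION}/{nar}-{VERSION}.nar"
    urllib.request.urlretrieve(url, f"{NIFI_LIB}/{nar}-{VERSION}.nar")
    print(f"downloaded {nar}")
```

After placing the NARs, restart NiFi so they get loaded.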
10-14-2025
11:41 AM
Hello @AlokKumar,

Thanks for using the Cloudera Community. As I understand it, what you need is to add one more step to your flow:

HandleHttpRequest -> MergeContent -> ExecuteScript (Groovy) -> HandleHttpResponse

Since you have JSON fields and files, you're getting multiple FlowFiles, so this extra MergeContent step will combine the JSON and the file into a single FlowFile. On MergeContent, set Merge Strategy to "Defragment" and set Correlation Attribute Name to http.request.id, which is unique for each HandleHttpRequest.
10-06-2025
01:18 PM
Hello @Brenda99,

The question is very broad; there are many things that can help improve performance. Some basic recommendations are documented here: https://docs.cloudera.com/cdp-private-cloud-base/7.3.1/tuning-spark/topics/spark-admin_spark_tuning.html

Take a look at the documentation; it could help you. Also, it would be worth talking with the team in charge of your account for a deeper performance tuning analysis.
09-18-2025
10:09 AM
Yes, you're right. It looks like with Java Kerberos the applications do not always have an application name that we can use here. I was reading about another option that makes processes fall back from one enctype to another, but that would require "allow_weak_crypto = true", and as you mentioned, that is not possible in your scenario. I'm not sure whether what you need is possible at all.
09-15-2025
09:56 PM
Hello @asand3r,

Glad to see you on the community. Directly in NiFi you cannot specify those encryption types per processor. What comes to my mind is to configure them per realm user; this should work. In krb5.conf you can specify them for each realm user, something like this:

[appdefaults]
    hdfs = {
        default_tgs_enctypes = arcfour-hmac-md5 aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
        permitted_enctypes = arcfour-hmac-md5 aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
    }

This will target any application using a principal with 'hdfs' in its name. You may need to be more specific in some cases, for example, using the full principal name. In your NiFi HDFS processors, you'll need to set the Kerberos Principal property to a value that matches the [appdefaults] section.
09-15-2025
09:44 PM
Hello @Jack_sparrow,

That should be possible. You don't need to manually specify partitions or HDFS paths; Spark handles this automatically when you use a DataFrameReader.

First, read the source table using "spark.read.table()". Since the table is a Hive partitioned table, Spark will automatically discover and read all 100 partitions in parallel, as long as you have enough executors and cores available. Spark then creates a logical plan to read the data.

Repartitioning the data is next. To ensure you have exactly 10 output partitions and to control the parallelism of the write operation, use the "repartition(10)" method. This will shuffle the data into 10 new partitions, which will be processed by 10 different tasks.

Finally, write the table with "write.saveAsTable()", specifying the format with ".format(\"parquet\")". See the sketch below.
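Putting the three steps together, a minimal PySpark sketch could look like this; the table names and the overwrite mode are assumptions you would adapt:

```python
# Minimal PySpark sketch: read a partitioned Hive table, shuffle to 10
# partitions, and write it back as Parquet. Table names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("repartition-to-10")
    .enableHiveSupport()          # required to read/write Hive tables
    .getOrCreate()
)

df = spark.read.table("db.source_table")   # partitions discovered automatically

(
    df.repartition(10)            # exactly 10 output partitions / 10 write tasks
      .write
      .format("parquet")
      .mode("overwrite")          # assumption: replace the target if it exists
      .saveAsTable("db.target_table")
)
```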
09-11-2025
04:33 PM
Hello @ShellyIsGolden,

Glad to see you in our community. Welcome! ChatGPT was not that wrong (😆); in fact, that approach makes sense. The PostgreSQL documentation describes that method:

String url = "jdbc:postgresql://localhost:5432/postgres?options=-c%20search_path=test,public,pg_catalog%20-c%20statement_timeout=90000";

https://jdbc.postgresql.org/documentation/use/#connection-parameters

Have you tested the JDBC connection outside of NiFi? Maybe with a psql command like this (the URL is quoted so the shell does not interpret the &):

psql -d 'postgresql://myurl:5432/mydatabase?options=-c%20search_path=myschema,public&stringtype=unspecified'

Also, check with your PG team to see whether that connect string works, and test more on that side.
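If Python is handy, here is another way to test the search_path outside of NiFi, a sketch assuming the psycopg2 driver is installed; host, database, user, and schema names are placeholders:

```python
# Hypothetical test of the search_path option outside NiFi using psycopg2.
# Host, database, user, and schema names below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="myurl",
    port=5432,
    dbname="mydatabase",
    user="myuser",
    options="-c search_path=myschema,public -c statement_timeout=90000",
)
with conn.cursor() as cur:
    cur.execute("SHOW search_path;")   # confirm the option was applied
    print(cur.fetchone())
conn.close()
```

If this prints your schema list, the same options string should behave the same way in the JDBC URL.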
09-11-2025
02:37 PM
Hello @ariajesus,

Welcome to our community. Glad to see you here. How did you create the resource? As a File Resource or as a Python Environment? Here are the steps for creating it: https://docs.cloudera.com/data-engineering/1.5.4/use-resources/topics/cde-create-python-virtual-env.html