Member since: 11-17-2021
Posts: 1148
Kudos Received: 258
Solutions: 30
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 169 | 04-23-2026 02:02 PM |
|  | 554 | 03-17-2026 05:26 PM |
|  | 5228 | 11-05-2025 10:13 AM |
|  | 875 | 10-16-2025 02:45 PM |
|  | 1454 | 10-06-2025 01:01 PM |
03-07-2026
06:45 AM
The certificate used in Ranger KMS expired, which caused the PySpark job to fail. We renewed the certificate and updated the KMS configuration; the jobs are now running fine.
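For anyone hitting the same symptom, a quick way to confirm whether a certificate has actually expired before renewing it is to read its `notAfter` date with `openssl`. This is a minimal sketch; the certificate path is a hypothetical placeholder for wherever your KMS certificate is exported:

```shell
# Print the expiry date of the exported KMS certificate
# (replace the path with your actual certificate file).
openssl x509 -noout -enddate -in /path/to/ranger-kms-cert.pem

# Or inspect the certificate served by the live endpoint
# (host and port are placeholders):
# openssl s_client -connect kms-host.example.com:9494 </dev/null 2>/dev/null \
#   | openssl x509 -noout -enddate
```

If the printed `notAfter` date is in the past, the certificate needs renewal.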
03-05-2026
09:53 AM
@DevOpsWorld Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
02-24-2026
07:35 AM
Thanks to the issue created by @svenvk, this was at least partially addressed in NiFi 2.8.0. The problem was that newer Avro/Parquet libraries use Java 8 date/time types instead of Long/Integer to represent timestamps. I say partially because the fix only implements these Avro logical types:
- time-millis
- time-micros
- timestamp-millis
- timestamp-micros

Missing are the nanosecond-precision type "timestamp-nanos" as well as the three "local-timestamp-{millis,micros,nanos}" types. See https://avro.apache.org/docs/1.12.0/specification/#timestamps
Additionally, the new Parquet viewer seems to suffer from the same problem that caused the error described here when displaying a flow file. I will create new issues in the NiFi Jira 🙂
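For reference, a minimal Avro schema sketch showing one implemented and one still-missing logical type side by side (record and field names are illustrative, not from any NiFi code):

```json
{
  "type": "record",
  "name": "Event",
  "fields": [
    {"name": "ts_micros", "type": {"type": "long", "logicalType": "timestamp-micros"}},
    {"name": "ts_nanos",  "type": {"type": "long", "logicalType": "timestamp-nanos"}}
  ]
}
```

Per the Avro 1.12 specification, both logical types annotate a `long`, so readers that only handle the first four types will fall back to treating `timestamp-nanos` as a plain long.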
02-13-2026
09:47 AM
@FGhidotti I have reached out via DM with further steps, thank you!
02-06-2026
02:55 PM
Thanks Asish! In our product, we've already bumped up our Hive version to 4.1 and connecting from a Hive 4.1 client to Hive 3.x doesn't work in many instances. We've made some changes to Hive 4.1 to allow slightly more backwards compatibility but there are some instances where serialized objects are used and can't be deserialized because of version mismatches. Thanks, Moe
01-30-2026
09:11 PM
1 Kudo
Here are some highlights from the month of December:
- 68 new support questions
- 2863 new members
WEBINAR
The Future of Agents with Private AI
Watch Now
WEBINAR
A Leader's Fireside Chat about Securing Next-Gen AI
Watch Now
Check out the FY26 Cloudera Meetup Events Calendar for upcoming & past event details!
We would like to recognize the following community members and employees for their efforts over the last month in providing community solutions.
See all our top participants at Top Solution Authors leaderboard and all the other leaderboards on our Leaderboards and Badges page.
@MattWho @vafs @venkatsambath @pajoshi @upadhyayk04 @Pedro_E @Bern @blackboks @zzzz77
Share your expertise and answer some of the open questions below. Also, be sure to bookmark the unanswered questions page to find additional open questions.
| Unanswered Community Post | Components/Labels |
|---|---|
| LPAD/RPAD Parsing Issues | Apache NiFi |
| ConsumeKafka_2_6 leaking lot of memory in apache nifi 1.25 | Apache NiFi |
01-10-2026
11:06 PM
@EmilKle FYI
➤ This issue typically arises because the comma (,) is a reserved delimiter in the hbase:meta table structure, used to separate the Table Name, Start Key, and Region ID. When a rowkey containing an unexpected comma is inserted, the HBase shell and client API often misinterpret the entry as a malformed region name, causing GET or DELETE commands to route incorrectly or fail validation.
➤ Here is how you can approach a safe repair, since traditional methods like HBCK2 fixMeta often skip these "illegal" keys when they don't follow the expected region-naming convention.
Use the HBase Shell with Hexadecimal Rowkeys
Standard string-based commands in the shell often fail because the shell parses the comma as a delimiter. Instead, find the exact byte representation of the rowkey and delete it using hexadecimal notation.
1. Find the hex key: run a scan to get the exact bytes of the corrupted row:
   scan 'hbase:meta', {ROWPREFIXFILTER => 'rowkey,'}
2. Delete using the binary string: if the rowkey is exactly rowkey,, use binary notation in the shell:
   delete 'hbase:meta', "rowkey\x2C", 'info:regioninfo'
   (Note: \x2C is the hex code for a comma.)
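Before running the delete, it can help to confirm the exact byte values of the offending rowkey outside the HBase shell. A small sketch (using the placeholder rowkey from above; substitute your actual key):

```shell
# Print each byte of the rowkey in hex; the trailing 2c is the comma,
# which is what you encode as \x2C in the HBase shell.
printf 'rowkey,' | od -An -tx1
```

Matching these bytes against the scan output ensures the binary-notation delete targets exactly the corrupted row and nothing else.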
01-10-2026
10:40 PM
@scala_ FYI
➤ It appears you have performed an exhaustive verification of the standard Kerberos and HBase configurations. The "GSS initiate failed" error in a Kerberized HBase environment, especially when standard connectivity and ticket validation pass, often points to subtle mismatches in how the Java process handles the security handshake or how the underlying OS interacts with the Kerberos libraries.
➤ Based on the logs and environment details you provided, here are the most likely remaining causes:

1. Java Cryptography Extension (JCE) and Encryption Types
While you confirmed support for AES256 in krb5.conf, the Java Runtime Environment (JRE) itself may be restricting it.
- The Issue: Older versions of Java 8 require the JCE Unlimited Strength Jurisdiction Policy Files to be installed manually to handle 256-bit encryption. If the Master sends an AES256 ticket but the RegionServer's JVM is restricted, the GSS initiation will fail.
- The Fix: Ensure the JCE policy files are installed, or, if you are using a modern OpenJDK, ensure the java.security file allows all encryption strengths. You can also temporarily restrict permitted_enctypes in krb5.conf to aes128-cts-hmac-sha1-96 to see whether the connection succeeds with a weaker encryption type.

2. Reverse DNS (RDNS) Mismatch
Kerberos is extremely sensitive to how hostnames are resolved.
- The Issue: Even with entries in /etc/hosts, Java's GSSAPI often performs a reverse DNS lookup on the Master's IP. If the IP 10.51.39.121 (from your previous logs) resolves to a different hostname than the one in your keytab (host117), or to no hostname at all, the GSS initiation will fail.
- The Fix: Add rdns = false to the [libdefaults] section of /etc/krb5.conf on all nodes. This forces Kerberos to use the hostname provided by the application rather than resolving the IP back to a name.

3. Service Principal Name (SPN) Case Sensitivity
In hbase-site.xml, the principals are often defined with _HOST placeholders.
- The Issue: If hbase.master.kerberos.principal is set to hbase/_HOST@REALM, HBase replaces _HOST with the fully qualified domain name (FQDN). If your system reports the FQDN as host117.kfs.local but the Kerberos database (KDB) only has hbase/host117@REALM, the handshake fails.
- The Fix: Ensure the output of the hostname -f command exactly matches the principal stored in the keytab.

4. JAAS "Server" vs. "Client" Sections
Your earlier logs mentioned: "Added the Server login module in the JAAS config file."
- The Issue: In HBase, the RegionServer acts as a Client when connecting to the Master. If your JAAS configuration only has a Server section and is missing a Client section (or if the Client section has incorrect keytab details), the RegionServer will fail to initiate the GSS context toward the Master.
- The Fix: Ensure your JAAS file contains both sections, and that the Client section points to the correct RegionServer keytab and principal.
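A sketch of the krb5.conf settings involved in points 1 and 2, assuming the standard [libdefaults] layout (the commented enctype line is only a temporary diagnostic, not a recommended permanent setting):

```ini
[libdefaults]
    # Point 2: stop GSSAPI from resolving the Master's IP back to a name
    rdns = false
    # Point 1 (diagnostic only): force a weaker enctype to rule out
    # JCE/unlimited-strength policy restrictions in the JVM
    # permitted_enctypes = aes128-cts-hmac-sha1-96
```

Remember to apply the change on all nodes and restart the affected HBase roles so the JVMs re-read the Kerberos configuration.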
01-10-2026
10:22 PM
@jkoral FYI
➤ Based on the logs provided, the checkpoint failure is caused by an authentication mismatch during the FSImage upload, further complicated by an underlying storage-type configuration issue.
➤ Primary Reason: Authentication Failure (403 Forbidden)
The Standby NameNode (SNN) successfully performs the checkpoint locally but fails to upload the merged fsimage back to the Active NameNode (NN).
- The Error: The SNN logs report: java.io.IOException: Exception during image upload: Response: 403 (Forbidden), Message: Non-exception fault: Authentication failed.
- The Mechanism: After merging the edits, the SNN attempts to POST the new image to the NN over HTTP. The NN rejects the request because it cannot verify the identity of the SNN, which is common in new clusters where Kerberos or shared-secret configurations are not fully synchronized.
➤ Recommended Fixes
- Verify HTTP Authentication: Check the dfs.namenode.secondary.http-address and dfs.namenode.http-address settings. Ensure the hdfs user has consistent permissions across both hosts.
- Check Firewall/SELinux: Since this is RHEL9, ensure the SNN can communicate with the NN on port 9870 (or 9871 if SSL is enabled).
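As a sketch, these are the hdfs-site.xml properties to cross-check on both hosts; the property names are the standard HDFS settings mentioned above, but the hostnames are placeholders you must replace with the addresses each NameNode actually binds:

```xml
<!-- hdfs-site.xml: both NameNodes must agree on these addresses -->
<property>
  <name>dfs.namenode.http-address</name>
  <value>active-nn.example.com:9870</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>standby-nn.example.com:9870</value>
</property>
```

If either value points at the wrong host or a port blocked by the RHEL9 firewall, the image upload POST will never reach an endpoint that can authenticate it.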
12-31-2025
04:37 PM
1 Kudo
Here are some highlights from the month of November:
- 42 new support questions
- 2124 new members
WEBINAR
2026 Trends in Data and AI
Date: January 14, 2026
Time: 8:00 am PT | 11:00 am ET | 1600 GMT
Register Now
WEBINAR The Future of Agents with Private AI
Watch Now
Check out the FY25 Cloudera Meetup Events Calendar for upcoming & past event details!
We would like to recognize the following community members and employees for their efforts over the last month in providing community solutions.
See all our top participants at Top Solution Authors leaderboard and all the other leaderboards on our Leaderboards and Badges page.
@MattWho @vafs @BFBounteous @akuser @PeterKa @casaui @Jaguar
Share your expertise and answer some of the open questions below. Also, be sure to bookmark the unanswered questions page to find additional open questions.
| Unanswered Community Post | Components/Labels |
|---|---|
| Best Practice for configuring registry flows | Apache NiFi |
| Can Apache Hadoop run reliably inside Istio service mesh with mTLS enabled? | Apache YARN |
| How Nifi handles huge 1 TB files ? | Apache NiFi |
| LPAD/RPAD Parsing Issues | Apache NiFi |
| ConsumeKafka_2_6 leaking lot of memory in apache nifi 1.25 | Apache NiFi |
| JOLT to flatten nested JSON | Apache NiFi |