Member since: 03-06-2020
Posts: 406
Kudos Received: 56
Solutions: 37

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 394 | 08-29-2025 12:27 AM |
| | 1021 | 11-21-2024 10:40 PM |
| | 978 | 11-21-2024 10:12 PM |
| | 3048 | 07-23-2024 10:52 PM |
| | 2154 | 05-16-2024 12:27 AM |
06-11-2024
10:29 PM
1 Kudo
Hi @rizalt,

The error occurs because you have not provided the keytab path; the command should look like this:

```
klist -k example.keytab
```

To create the keytab, you can follow these steps:

```
$ ktutil
ktutil: addent -password -p myusername@FEDORAPROJECT.ORG -k 42 -f
Password for myusername@FEDORAPROJECT.ORG:
ktutil: wkt /tmp/kt/fedora.keytab
ktutil: q
```

Then:

```
kinit -kt /tmp/kt/fedora.keytab myusername@FEDORAPROJECT.ORG
```

Note: Replace the username and REALM to match your cluster configuration.

Regards,
Chethan YM
06-07-2024
04:17 AM
1 Kudo
2. An alternative is to write a script (e.g., in Bash) that interacts with Hive and produces your desired output format.
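A minimal sketch of such a script, assuming a hypothetical HiveServer2 URL and using beeline's `csv2` output format (the hostname, principal, and table are placeholders, not values from this thread):

```shell
#!/usr/bin/env sh
# Hypothetical HiveServer2 connection URL -- adjust to your cluster.
HIVE_URL='jdbc:hive2://hs2.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM'

# Run a HiveQL statement via beeline and print the result as CSV.
run_hive_query() {
  beeline -u "$HIVE_URL" --silent=true --outputformat=csv2 -e "$1"
}

# Example post-processing step: strip the CSV header row so the
# output can be piped into another tool.
drop_header() {
  tail -n +2
}

# Usage (on a real cluster):
#   run_hive_query 'SELECT id, name FROM mydb.users' | drop_header > users.csv
```

From there you can reshape the CSV with standard tools (awk, sed, cut) into whatever output format you need.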
05-24-2024
04:05 AM
1 Kudo
This was an SSSD issue on Unix; it is resolved.
05-24-2024
04:04 AM
1 Kudo
This was a Kerberos issue; it is resolved.
05-22-2024
10:52 PM
@vlallana, did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution; that will help others find the answer more easily in the future.
05-16-2024
09:44 PM
1 Kudo
Hi @ChethanYM, I did a pg_dump and grepped for the old NameNode DNS in my Hive metastore, and found that the table locations in there referenced the old DNS. Setting the table location to the new NameNode with `ALTER TABLE <table> SET LOCATION '<new location>'` in Hive fixed the issue. Thanks for your help! David
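When many tables are affected, the `ALTER TABLE ... SET LOCATION` statements can be generated in bulk. A hedged sketch, assuming you have dumped "table&lt;TAB&gt;location" pairs from the metastore (table names, hosts, and paths below are hypothetical):

```shell
#!/usr/bin/env sh
# Hypothetical old/new NameNode endpoints -- adjust to your cluster.
OLD_NN='old-nn.example.com:8020'
NEW_NN='new-nn.example.com:8020'

# Read "table<TAB>location" lines on stdin; for each location still on the
# old NameNode, emit an ALTER TABLE statement pointing at the new one.
# (\047 is a single quote in awk's printf.)
make_alter_statements() {
  awk -F '\t' -v old="$OLD_NN" -v new="$NEW_NN" \
    'index($2, old) { loc = $2; sub(old, new, loc);
       printf "ALTER TABLE %s SET LOCATION \047%s\047;\n", $1, loc }'
}

# Usage: make_alter_statements < locations.tsv > fix_locations.sql
```

The generated SQL file can then be reviewed and run in Hive in one pass.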
05-08-2024
02:43 PM
For documentation purposes, in case it helps someone:

We took the krb5.ini that was used on the CDP cluster and saved it to the client Windows server. We used LogLevel=6 and LogPath=&lt;some-path&gt; in our JDBC URI to enable trace-level logging. Based on findings from the trace-level logs, java.security.auth.login.config was pointing to an incorrect login module. Since we had turned on memory-based cache, removing the pointer to java.security.auth.login.config forced the correct TGT ticket to be picked up. We did not opt for a custom jaas.conf either; there were only minor tweaks to the domain and realm values. This resolved our issue.
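For reference, a Cloudera JDBC connection string with trace logging enabled looks roughly like this (host, realm, and log path are placeholders, not the actual values from this case):

```
jdbc:hive2://hs2.example.com:10000/default;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=hs2.example.com;KrbServiceName=hive;LogLevel=6;LogPath=C:\jdbc-trace
```

LogLevel=6 corresponds to trace-level logging in the Cloudera JDBC driver; the trace files written to LogPath show which login module and credential cache the driver actually used.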
05-01-2024
05:28 AM
@Anderosn

1. If the content of your flow file is too large to insert into a single CLOB column, you can split it into smaller chunks and insert each chunk into the database separately.
2. Instead of storing the content in a CLOB column, consider storing it in a BLOB (Binary Large Object) column. BLOB columns can store binary data, including large files, without the size limitations of CLOB columns.
3. Store the content of the flow file in an external storage system (e.g., HDFS, Amazon S3) and insert only a reference (e.g., a file path or URL) into the database. This approach is useful if the database limits the size of CLOB or BLOB columns.
4. If ExecuteScript is not approved, consider using an external script or application to perform the insertion into the database, triggered from NiFi via the ExecuteProcess or InvokeHTTP processors.

Regards, Chethan YM
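Option 1 (chunking) can be sketched as a small shell helper. The chunk size, table name, and columns below are assumptions for illustration, not anything prescribed by NiFi or the database:

```shell
#!/usr/bin/env sh
# Assumed chunk size; tune to what one CLOB insert can accept.
CHUNK_SIZE=4000

# Read the flow-file payload on stdin and emit one INSERT per chunk for a
# hypothetical table big_text(id, seq, part). $1 is the record id.
# (\047 is a single quote in awk's printf.)
emit_chunk_inserts() {
  id="$1"
  fold -w "$CHUNK_SIZE" | awk -v id="$id" \
    '{ printf "INSERT INTO big_text (id, seq, part) VALUES (%s, %d, \047%s\047);\n", id, NR, $0 }'
}

# Usage: emit_chunk_inserts 42 < payload.txt > inserts.sql
```

The `seq` column preserves chunk order so the original content can be reassembled by concatenating parts in `seq` order.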
04-15-2024
10:17 AM
2 Kudos
To resolve this issue, set this property to 0 and restart Impala:

CM > Impala > Configuration > Impala Daemon command-line safety valve: -idle_client_poll_period_s=0

This is a startup flag, not a query option. Its default value is 30 seconds, which is why the session in the excerpt above was closed after 30 seconds. With the flag set to 0, Impala does not periodically check client connections, so they remain open until the client applications close them explicitly.