Member since: 10-11-2022
Posts: 121
Kudos Received: 20
Solutions: 10
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 686 | 11-07-2024 10:00 PM
 | 1202 | 05-23-2024 11:44 PM
 | 1042 | 05-19-2024 11:32 PM
 | 5351 | 05-18-2024 11:26 PM
 | 2058 | 05-18-2024 12:02 AM
11-17-2024
09:22 PM
@Bhavs, did the response help resolve your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
11-11-2024
02:35 AM
1 Kudo
Thanks ~ Good Answer~
06-03-2024
03:49 PM
1 Kudo
@sibin Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
05-27-2024
05:11 AM
2 Kudos
@MattWho To authenticate to the web UI in NiFi I use the LDAP credentials (myuser). For Kerberos authentication via shell I use myuser@REALM. After setting the following parameters in nifi.properties:

nifi.security.identity.mapping.pattern.kerb=^(.*?)(?:@.*?)$
nifi.security.identity.mapping.value.kerb=$1
nifi.security.identity.mapping.transform.kerb=NONE

the token via Kerberos now works and I no longer get permission errors. Thanks! Lorenzo
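For illustration, here is a minimal sketch (in Python, whose regex semantics for this particular pattern mirror the Java regex NiFi applies) of what that pattern/value pair does to an incoming Kerberos principal; the principal value is just an example:

import re

# The kerb mapping pattern from nifi.properties.
pattern = re.compile(r"^(.*?)(?:@.*?)$")

principal = "myuser@REALM"  # identity extracted from the Kerberos token
match = pattern.match(principal)
if match:
    # The value template $1 keeps only the first capture group,
    # stripping the @REALM suffix so it matches the LDAP identity.
    print(match.group(1))  # -> "myuser"
else:
    # Identities without a realm suffix (e.g. plain LDAP "myuser")
    # do not match this pattern and pass through unchanged.
    print(principal)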
05-23-2024
11:44 PM
@drewski7
1. Ensure that the Kerberos tickets are being refreshed properly for the HBase REST server; stale or expired tickets might cause intermittent authorization issues. Check the Kerberos cache to ensure it is being updated correctly when group memberships change in LDAP (a rough sketch of this check follows this list).
2. Restart the HBase REST server after making changes to the LDAP group and running the user sync to see if that resolves the inconsistency.
3. Analyze the HBase REST server logs more thoroughly, especially the messages related to unauthorized access and Kerberos thread issues. Look for patterns or specific errors that could provide more clues.
4. Verify the settings for ranger.plugin.hbase.policy.pollIntervalMs and ranger.plugin.hbase.authorization.cache.max.size again, and experiment with lowering the poll interval to see if it improves the responsiveness of policy changes.
5. In the Ranger Admin UI, after running the user sync, manually refresh the policies for HBase and observe whether this has any immediate effect on the authorization behavior. Confirm that there are no discrepancies between the policies displayed in the Ranger Admin UI and the actual enforcement in HBase.
6. Double-check the synchronization between FreeIPA LDAP and Ranger. Ensure that the user sync is not just updating the Ranger Admin UI but is also effectively communicating changes to all Ranger plugins. Review the user sync logs to verify that all changes are processed correctly without errors.
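As a rough illustration of the ticket-freshness check in the first suggestion, something like the following could be scripted (the cache path, keytab path, and principal are hypothetical placeholders, not values from this thread):

import subprocess

# Inspect the Kerberos ticket cache used by the HBase REST server.
out = subprocess.run(["klist", "-c", "/tmp/krb5cc_hbase"],
                     capture_output=True, text=True)
print(out.stdout)  # check the "Expires" column for stale tickets

# If the ticket is expired or close to expiry, re-initialize it from
# the service keytab (placeholder keytab path and principal):
# subprocess.run(["kinit", "-kt", "/etc/security/keytabs/hbase.service.keytab",
#                 "hbase/host.example.com@EXAMPLE.COM"])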
05-21-2024
07:44 AM
Thanks a lot for the information @RAGHUY, and @MattWho for the comments. I will accept the solution. We have a lot of integrations built on NiFi 1.x, so I think we will try to find a way to overcome the problem, at least for our versions, and then migrate to 2.x when possible. Thanks again!!
05-20-2024
01:38 PM
@galt @RAGHUY Let me add some corrections/clarity to the accepted solution.

"Export and Modify Flow Configuration: Export the NiFi flow configuration, typically in XML format. This can be done via the NiFi UI or by utilizing NiFi's REST API. Then, manually adjust the XML to change the ID of the connection to the desired value."

It is not clear here what is being done. The only way to export a flow configuration from NiFi in XML format is by generating a NiFi template (deprecated, and removed in Apache NiFi 2.x versions). Even if you were to generate a template and export it via the NiFi UI or NiFi's REST API, modifying it would not change what is on the canvas. If you were to modify the connection component UUID in all places in the template, then upon uploading that template back into NiFi you would need to drop the template on the canvas, which would result in every component in that template getting a new UUID. So this does not work. Newer versions of NiFi (1.18+) also support flow definitions, which are in JSON format, but the same issue persists when using flow definitions in this manner.

In a scenario like the one described in this post, where the user removed a connection by mistake and then re-created it, the best option is to restore/revert the previous flow. Whenever a change is made to the canvas, NiFi auto-archives the current flow.xml.gz (legacy) and flow.json.gz (current) files into an archive sub-directory and generates new flow.xml.gz/flow.json.gz files. The best and safest approach is to shut down all nodes in your NiFi cluster, navigate to the NiFi conf directory, and swap the current flow.xml.gz/flow.json.gz files with the archived flow.xml.gz/flow.json.gz files still containing the connection with the original needed ID.

When the above is not possible (maybe the change went unnoticed for too long and all archived versions have the new connection UUID), you need to manually modify the flow.xml.gz/flow.json.gz files. Shut down all your NiFi nodes to avoid any changes being made on the canvas while performing the following steps.

Option 1:
1. Make a backup of the current flow.xml.gz and flow.json.gz.
2. Search each file for the original UUID to make sure it does not already exist.
3. On one node, manually modify the flow.xml.gz and flow.json.gz files by locating the current bad UUID and replacing it with the original needed UUID (a rough sketch of this swap follows below).
4. Copy the modified flow.xml.gz and flow.json.gz files to all nodes in the cluster, replacing the original files. This is possible since all nodes run the same version of the flow.

Option 2:
1.-3. Same as steps 1-3 of Option 1.
4. Start NiFi only on the node where you modified the flow.xml.gz and flow.json.gz files.
5. On all other nodes, still stopped, remove or rename the flow.xml.gz and flow.json.gz files.
6. Start all the remaining nodes. Since they do not have a flow.xml.gz or flow.json.gz to load, they will inherit the flow from the cluster as they join it.

NOTE: The flow.xml.gz was replaced by the newer flow.json.gz format starting with Apache NiFi 1.16. When NiFi 1.16 or newer is started and only has a flow.xml.gz file, it will load from flow.xml.gz and then generate the new flow.json.gz format. Apache NiFi 1.16+ will load only from the flow.json.gz on startup when that file exists, but will still write out both the flow.xml.gz and flow.json.gz formats anytime a change is made to the canvas. With Apache NiFi 2.x the flow.xml.gz format will go away.
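As a rough sketch of the UUID swap in step 3 of Option 1, the following assumes NiFi is stopped; the path and UUIDs are illustrative placeholders, and the same edit would be applied to flow.xml.gz:

import gzip
import shutil

FLOW = "/opt/nifi/conf/flow.json.gz"  # adjust to your conf directory
BAD_UUID = "11111111-2222-3333-4444-555555555555"   # current (wrong) connection UUID
GOOD_UUID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # original (needed) connection UUID

# Step 1: back up the current file before touching it.
shutil.copy2(FLOW, FLOW + ".bak")

# Step 2: read the flow and verify the original UUID is not already present.
with gzip.open(FLOW, "rt", encoding="utf-8") as f:
    content = f.read()
if GOOD_UUID in content:
    raise SystemExit("Original UUID already present; aborting.")
if BAD_UUID not in content:
    raise SystemExit("Current UUID not found; check the file and IDs.")

# Step 3: replace the bad UUID everywhere and write the file back compressed.
with gzip.open(FLOW, "wt", encoding="utf-8") as f:
    f.write(content.replace(BAD_UUID, GOOD_UUID))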
Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
05-20-2024
12:36 PM
@SAMSAL This is not a new problem, but rather something that has existed with NiFi on Windows for a very long time. You'll need to avoid using spaces in directory names or wrap the directory path in quotes to avoid the issue. NIFI-200 - Bootstrap loader doesn't handle directories with spaces in it on Windows. Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
05-19-2024
11:32 PM
1 Kudo
@ChineduLB Apache Impala does not support multi-statement transactions, so you cannot perform an atomic transaction that spans multiple INSERT statements directly. You can achieve a similar effect by combining the INSERT INTO commands into a single INSERT INTO ... SELECT statement that includes UNION ALL. This method ensures that all partitions are loaded within the same query run. You can consolidate your insert statements into one query:

INSERT INTO client_view_tbl PARTITION (cobdate, region)
SELECT col, col2, col3, '20240915' AS cobdate, 'region1' AS region
FROM region1_table
WHERE cobdate = '20240915'
UNION ALL
SELECT col, col2, col3, '20240915' AS cobdate, 'region2' AS region
FROM region2_table
WHERE cobdate = '20240915'
UNION ALL
SELECT col, col2, col3, '20240915' AS cobdate, 'region3' AS region
FROM region3_table
WHERE cobdate = '20240915';

Single query execution: this approach consolidates multiple INSERT statements into one, which can improve performance and ensure consistency within the query execution context. Simplified management: managing a single query is easier than handling multiple INSERT statements. Ensure that your source tables (region1_table, region2_table, region3_table) and the client_view_tbl table have compatible schemas, especially regarding the columns being selected and inserted. Be mindful of the performance implications when dealing with large datasets, and test the combined query to ensure it performs well under your data volume. By using this combined INSERT INTO ... SELECT ... UNION ALL approach, you can effectively populate multiple partitions of the client_view_tbl table in one query. Please accept it as a solution if it helps.
05-13-2024
01:34 AM
1 Kudo
Thank you for the response, @RAGHUY. Would these be the only steps we need to perform, or, based on your experience, have you identified anything additional that is not mentioned in the documentation you shared? Thanks, snm1523