Member since
03-16-2024
4
Posts
2
Kudos Received
0
Solutions
06-21-2024
05:20 AM
In my case the problem was related to storage issues.
06-20-2024
10:57 AM
Transferring Data from Multiple Tables in NiFi:

NiFi provides processors that pull data from database tables using JDBC drivers. For Oracle, you can use processors such as ExecuteSQL, QueryDatabaseTable, and GenerateTableFetch. To transfer data from multiple tables, consider the following approaches:

1. Individual flows for each table: create a separate NiFi flow per table. This approach is straightforward but may require more management.
2. Dynamic SQL generation: use the ListDatabaseTables processor to list tables dynamically. Then use ReplaceText (with NiFi Expression Language) to build a SQL statement for each table. Finally, send these statements to ExecuteSQL for fetching data.
3. Parallel fetching: if you have a NiFi cluster, route GenerateTableFetch into a Remote Process Group pointing at an Input Port on the same cluster, so the fetches are distributed across nodes.

Automatic Table Creation in Cassandra Using Avro Schemas:

To create tables in Cassandra, you can use the PutCassandraRecord processor. It allows you to put data directly into Cassandra without writing CQL. For schema management, consider using Avro schemas: you can define Avro schemas for your data and use them within your NiFi flow. To handle overwriting tables, you will need to manage that logic in your flow.

See also: "Can NiFi load data from DB2 to Cassandra?" on Stack Overflow.
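The dynamic SQL generation approach can be sketched outside NiFi as well. The loop below (table names CUSTOMERS, ORDERS, INVOICES are assumed placeholders) mimics what ListDatabaseTables followed by ReplaceText produces: one SELECT statement per table, each of which would then be routed to ExecuteSQL. Inside NiFi itself, ReplaceText would build the statement via Expression Language, e.g. SELECT * FROM ${db.table.name}.

```shell
# Hypothetical sketch of the ListDatabaseTables -> ReplaceText step:
# emit one fetch statement per table, as the flow would generate one
# FlowFile per table carrying the SQL in its content.
for table in CUSTOMERS ORDERS INVOICES; do   # assumed table names
  echo "SELECT * FROM ${table}"
done
```

In a real flow you would also typically add a WHERE clause on a maximum-value column (as QueryDatabaseTable and GenerateTableFetch do) so repeated runs only fetch new rows.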
06-14-2024
06:54 AM
2 Kudos
@helk You can use a single certificate to secure all your nodes, but I would not recommend doing so, for security reasons. You risk compromising all your hosts if any one of them is compromised. Additionally, NiFi nodes act as clients and not just servers, which means all your hosts would identify themselves as the same client (based on DN). Tracking a client-initiated action back to a specific node would then be more challenging, and if auditing is needed, very difficult.

The SAN is meant to be used differently. Let's assume you host an endpoint searchengine.com which is backed by 100 servers that handle client requests. When a client tries to access searchengine.com, that request may get routed to any one of those 100 servers. The certificate issued to each of those 100 servers is unique to that server; however, every single one of them carries searchengine.com as an additional SAN entry alongside its unique hostname. This allows hostname verification to still succeed, since all 100 servers are also known as searchengine.com.

Your specific issue, based on the output shared above, is caused by the fact that your single certificate does not have "nifi01" in its list of Subject Alternative Names (SAN). It appears you only added nifi02 and nifi03 as SAN entries. Current hostname verification specs no longer use the DN for hostname verification; only the SAN entries are used. So all names (hostnames, common names, IPs) that may be used when connecting to a host must be included in the SAN list.

NiFi cluster keystore requirements:
1. The keystore can contain only ONE PrivateKeyEntry.
2. The PrivateKey cannot use wildcards in the DN.
3. The PrivateKey must contain both clientAuth and serverAuth Extended Key Usage (EKU).
4. The PrivateKey must contain at least one SAN entry matching the hostname of the server on which the keystore will be used.

The NiFi truststore must contain the complete trust chain for your cluster nodes' PrivateKeys. One truststore is typically copied to and used on all nodes.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
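The keystore requirements above can be illustrated with openssl. The sketch below is a self-signed example only (the hostnames nifi01 and nifi01.example.com are assumed placeholders): it generates a per-node key whose certificate carries both required EKUs and a SAN entry matching the node's own hostname. In a real cluster each node's certificate would be signed by a common CA so the truststore's trust chain works.

```shell
# Hypothetical per-node certificate for nifi01 (self-signed for illustration).
# Hostname verification uses only the SAN list, so the node's own hostname
# must appear there, and both serverAuth and clientAuth EKUs are required.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout nifi01.key -out nifi01.crt -days 365 \
  -subj "/CN=nifi01.example.com" \
  -addext "subjectAltName=DNS:nifi01,DNS:nifi01.example.com" \
  -addext "extendedKeyUsage=serverAuth,clientAuth"

# Inspect the SAN and EKU extensions actually present in the certificate:
openssl x509 -in nifi01.crt -noout -ext subjectAltName,extendedKeyUsage
```

The key and certificate would then be packaged into that node's keystore (e.g. a PKCS12 file, optionally converted with keytool -importkeystore) as its single PrivateKeyEntry.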