Member since: 08-06-2024
Posts: 6
Kudos Received: 4
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 229 | 08-16-2024 01:44 AM
08-16-2024 01:44 AM · 2 Kudos
> @varungupta wrote: In Apache NiFi, we are interacting with a SOAP API through PowerShell, as the API expects the attachment with MTOM. Now if we load 10,000 requests, we can see that only one server is acting on those requests while the other two servers sit idle. Is there a way to share the load across all three servers?

Hello,

It sounds like you're encountering a load balancing issue in your Apache NiFi cluster. To ensure that all three servers share the load, you can use NiFi's built-in load-balanced connections:

1. **Enable load-balanced connections.** NiFi can distribute the FlowFiles queued in a connection across the cluster. The available strategies are:
   - **Do Not Load Balance** (the default): no load balancing.
   - **Round Robin**: distributes FlowFiles evenly across all nodes.
   - **Single Node**: sends all FlowFiles to a single node.
   - **Partition by Attribute**: routes FlowFiles to nodes based on the value of a specific attribute.
2. **Configure the connection.** Select the connection between processors that you want to load balance, open its configuration, and set the Load Balance Strategy in the connection's settings, for example to Round Robin.
3. **Check node status.** Ensure all nodes in your cluster are active and properly connected; you can verify this in the Cluster view of the NiFi UI.
4. **Monitor performance.** Use NiFi's status history and queue counts to observe the distribution of FlowFiles and confirm that the load is balanced across all nodes.

By following these steps, you should be able to distribute the load more evenly across your NiFi cluster, so that all three servers are utilized efficiently.

Hope this will help you.

Best regards, florence0239
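Load-balanced connections also depend on cluster-level settings in `nifi.properties` on every node; if those ports are blocked or misconfigured, the Round Robin strategy cannot move FlowFiles between nodes. A minimal sketch of the relevant properties (the hostname is a placeholder and the values shown are illustrative defaults; check your own install and the NiFi Administration Guide):

```properties
# Hostname this node advertises for load-balanced connections (assumption: node1.example.com)
nifi.cluster.load.balance.host=node1.example.com
# Port used to transfer FlowFiles between nodes for load-balanced connections
nifi.cluster.load.balance.port=6342
# Number of TCP connections opened to each peer node
nifi.cluster.load.balance.connections.per.node=4
# Threads servicing load-balance traffic on this node
nifi.cluster.load.balance.max.thread.count=8
```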
08-12-2024 11:00 PM · 1 Kudo
> @Adyant001 wrote: Need to save JSON data to multiple child tables. How should I do it?

Hello,

To save JSON data to multiple Oracle tables, use the `JSON_TABLE` function to parse the JSON and then insert the parsed data into the respective tables. Here's a concise example.

**Parse the JSON data:**

```sql
SELECT *
FROM JSON_TABLE(
  '{"employee": {"id": 1, "name": "John Doe", "department": "Sales", "address": {"street": "123 Main St", "city": "Anytown", "state": "CA"}}}',
  '$.employee'
  COLUMNS (
    id         NUMBER       PATH '$.id',
    name       VARCHAR2(50) PATH '$.name',
    department VARCHAR2(50) PATH '$.department',
    street     VARCHAR2(50) PATH '$.address.street',
    city       VARCHAR2(50) PATH '$.address.city',
    state      VARCHAR2(2)  PATH '$.address.state'
  )
) jt;
```

**Insert the parsed data into each table:**

```sql
-- Insert into the parent employee table
INSERT INTO employee (id, name, department)
SELECT id, name, department
FROM JSON_TABLE(
  '{"employee": {"id": 1, "name": "John Doe", "department": "Sales", "address": {"street": "123 Main St", "city": "Anytown", "state": "CA"}}}',
  '$.employee'
  COLUMNS (
    id         NUMBER       PATH '$.id',
    name       VARCHAR2(50) PATH '$.name',
    department VARCHAR2(50) PATH '$.department'
  )
);

-- Insert into the child address table
INSERT INTO address (employee_id, street, city, state)
SELECT id, street, city, state
FROM JSON_TABLE(
  '{"employee": {"id": 1, "name": "John Doe", "department": "Sales", "address": {"street": "123 Main St", "city": "Anytown", "state": "CA"}}}',
  '$.employee'
  COLUMNS (
    id     NUMBER       PATH '$.id',
    street VARCHAR2(50) PATH '$.address.street',
    city   VARCHAR2(50) PATH '$.address.city',
    state  VARCHAR2(2)  PATH '$.address.state'
  )
);
```

This should help you get started!

Best regards, florence0239
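If you would rather parse the JSON once instead of repeating `JSON_TABLE` in each statement, Oracle's multi-table `INSERT ALL` can feed both tables from a single pass. A sketch under the same assumptions as the example above (the `employee` and `address` table and column names are illustrative, not a fixed schema):

```sql
INSERT ALL
  -- One JSON_TABLE row fans out into two target tables
  INTO employee (id, name, department)            VALUES (id, name, department)
  INTO address  (employee_id, street, city, state) VALUES (id, street, city, state)
SELECT id, name, department, street, city, state
FROM JSON_TABLE(
  '{"employee": {"id": 1, "name": "John Doe", "department": "Sales", "address": {"street": "123 Main St", "city": "Anytown", "state": "CA"}}}',
  '$.employee'
  COLUMNS (
    id         NUMBER       PATH '$.id',
    name       VARCHAR2(50) PATH '$.name',
    department VARCHAR2(50) PATH '$.department',
    street     VARCHAR2(50) PATH '$.address.street',
    city       VARCHAR2(50) PATH '$.address.city',
    state      VARCHAR2(2)  PATH '$.address.state'
  )
);
```

Note that `INSERT ALL` does not guarantee the parent row is inserted before the child, so a deferrable foreign key (or separate statements, as above) may be needed if the constraint is enforced immediately.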
08-09-2024 02:31 AM · 1 Kudo
> @s198 wrote: My requirement is to retrieve the total number of files in a given HDFS directory and, based on the number of files, proceed with the downstream flow. I cannot use the ListHDFS processor as it does not allow inbound connections. The GetHDFSFileInfo processor generates flowfiles for each HDFS file, causing all downstream processors to execute the same number of times. I have observed that we can use ExecuteStreamCommand to invoke a script and execute HDFS commands to get the number of files. I would like to know if we can obtain the count without using a script, or if there is any other option available besides the above.

Hello,

One option is to run the HDFS command from an ExecuteScript processor with a short inline Groovy script. Strictly speaking this still shells out to `hdfs dfs`, but it avoids maintaining a separate script file on disk and keeps everything inside your NiFi flow.

1. **Add the ExecuteScript processor**: drag and drop it onto your NiFi canvas.
2. **Configure the processor**: set the Script Engine to Groovy and paste the following into the Script Body:

```groovy
def flowFile = session.get()
if (!flowFile) return

// Replace with the actual HDFS directory you want to count
def hdfsDir = '/path/to/hdfs/directory'

// `hdfs dfs -count <dir>` prints: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
def process = "hdfs dfs -count ${hdfsDir}".execute()
process.waitFor()
def output = process.in.text

// The second whitespace-separated column is the file count
def fileCount = output.trim().split(/\s+/)[1]

flowFile = session.putAttribute(flowFile, 'file.count', fileCount)
session.transfer(flowFile, REL_SUCCESS)
```

3. **Set the HDFS directory path**: replace /path/to/hdfs/directory with the actual path to your HDFS directory.
4. **Connect the processor** to the downstream processors, which can then read the count from the `file.count` attribute and route on it.

This approach keeps everything within NiFi: the ExecuteScript processor runs the HDFS command, extracts the file count, and exposes it as a flowfile attribute for your downstream processors.

Hope this will help you.

Best regards, florence0239
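As a sanity check on the column parsing above: `hdfs dfs -count` prints four columns (directory count, file count, content size, path), so the file count is the second field. Simulating that output with a fixed string (the numbers here are made up for illustration):

```shell
# A line shaped like `hdfs dfs -count` output: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
out="           4           27            1234567 /path/to/dir"

# Extract the second whitespace-separated field, i.e. the file count
count=$(echo "$out" | awk '{print $2}')
echo "$count"   # prints 27
```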
08-08-2024 02:10 AM
> @Kondaji wrote: Hi Team, I am facing an issue while trying to fetch a value using JsonPathReader with the Avro schema attributes below. I am reading JSON with JsonPathReader and want to fetch the home number with the expression `$.phoneNumbers[?(@.type=="home")].number`; however, it is failing. Please help with this.

```json
{
  "firstName": "John",
  "lastName": "Smith",
  "isAlive": true,
  "age": 25,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "phoneNumbers": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "office",
      "number": "646 555-4567"
    }
  ],
  "children": [],
  "spouse": null
}
```

Hello,

The JSONPath expression itself looks valid, so there are a few things worth checking:

1. **JsonPathReader configuration**: in JsonPathReader each record field is defined as a user-defined property whose value is a JSONPath expression; verify that the property for the phone number field matches the expression exactly. Single quotes inside the filter, `$.phoneNumbers[?(@.type=='home')].number`, avoid any escaping problems with the double quotes.
2. **Avro schema**: a filter expression returns an array of matches rather than a single value, so if the schema declares the field as a plain string the reader may fail to coerce the one-element array; declaring the field as an array of strings is worth trying.
3. **Validate the expression** against the sample document outside NiFi to confirm it returns `["212 555-1234"]`, which isolates whether the problem is the expression or the reader/schema configuration.

Hope this will help you.

Best regards, florence0239
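To confirm what the filter expression should yield, here is a plain-Python equivalent of `$.phoneNumbers[?(@.type=="home")].number` run against a trimmed copy of the sample document (a sanity check only, not NiFi code):

```python
import json

# Trimmed version of the sample document from the question
doc = json.loads("""
{
  "firstName": "John",
  "phoneNumbers": [
    {"type": "home", "number": "212 555-1234"},
    {"type": "office", "number": "646 555-4567"}
  ]
}
""")

# Equivalent of the JSONPath filter: keep numbers whose type is "home"
homes = [p["number"] for p in doc["phoneNumbers"] if p["type"] == "home"]
print(homes)  # ['212 555-1234']
```

Note that the result is a list with one element, not a bare string, which is exactly why a scalar schema field can trip over a filter expression.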
08-06-2024 04:35 AM
> @PriyankaMondal wrote: Hi Team, I want to achieve the below-mentioned transformation in NiFi using any processor. Please help me to get this done.

Sample input:

```json
{
  "date": "35 days 11:13:10.88",
  "key1": "value1",
  "keyToBeMapped1": "hostname.com",
  "key2": "value2",
  "key3": "value3",
  "key4": "value4",
  "keyToBeMapped2": "High Paging Rate",
  "key5": "PAGING",
  "keyToBeMapped3": "A high paging activity has been detected on host abc.lab.com. This could mean that too many processes are being run",
  "Entity OID": "keyToBeMapped1",
  "Parameter": "keyToBeMapped2",
  "Description": "keyToBeMapped3"
}
```

Expected output:

```json
{
  "date": "35 days 11:13:10.88",
  "key1": "value1",
  "keyToBeMapped1": "hostname.com",
  "key2": "value2",
  "key3": "value3",
  "key4": "value4",
  "keyToBeMapped2": "High Paging Rate",
  "key5": "PAGING",
  "keyToBeMapped3": "A high paging activity has been detected on host abc.lab.com. This could mean that too many processes are being run",
  "Entity OID": "hostname.com",
  "Parameter": "High Paging Rate",
  "Description": "A high paging activity has been detected on host abc.lab.com. This could mean that too many processes are being run"
}
```

Regards, Priyanka

Hello,

You can achieve this transformation in NiFi using the JoltTransformJSON processor. Jolt is a JSON-to-JSON transformation library that lets you specify transformations declaratively.

1. **Add the JoltTransformJSON processor**: drag and drop it onto your NiFi canvas.
2. **Configure the processor**: double-click it, go to the Properties tab, and set the Jolt Specification. Because you want to overwrite three existing fields in place while leaving everything else untouched, a `modify-overwrite-beta` operation is the right fit (in a `shift` spec the right-hand side is an output *path*, not a value to copy, so a shift with `@(1,...)` on the right would not produce the output you want):

```json
[
  {
    "operation": "modify-overwrite-beta",
    "spec": {
      "Entity OID": "@(1,keyToBeMapped1)",
      "Parameter": "@(1,keyToBeMapped2)",
      "Description": "@(1,keyToBeMapped3)"
    }
  }
]
```

3. **Apply the configuration** and connect the JoltTransformJSON processor to the next processor in your flow.

**Explanation of the Jolt specification.** `modify-overwrite-beta` overwrites only the fields listed in the spec and passes every other field through unchanged. The reference `@(1,keyToBeMapped1)` walks one level up from the matched key to the enclosing object and grabs the value of its `keyToBeMapped1` field, so `Entity OID` receives `"hostname.com"`, and likewise for `Parameter` and `Description`.

**Example flow:** GenerateFlowFile (to simulate input JSON) → JoltTransformJSON (with the above specification) → LogAttribute (to log the transformed JSON).

This setup should transform your input JSON to the expected output format.

Hope this will help you.

Best regards, florence0239
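Notice that in the sample input the value of each target field ("Entity OID", "Parameter", "Description") is the *name* of the sibling field whose value should replace it. The Jolt spec hardcodes the three mappings; if you ever need the general indirection (any number of mapped fields), the dereferencing logic looks like this in plain Python (a sketch for clarity, not a NiFi component):

```python
# Trimmed version of the sample input: each target field's value
# names the sibling key holding the real value.
data = {
    "keyToBeMapped1": "hostname.com",
    "keyToBeMapped2": "High Paging Rate",
    "Entity OID": "keyToBeMapped1",
    "Parameter": "keyToBeMapped2",
}

# Replace each target field with the value of the key it names
for target in ("Entity OID", "Parameter"):
    data[target] = data[data[target]]

print(data["Entity OID"])  # hostname.com
print(data["Parameter"])   # High Paging Rate
```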