Member since: 04-24-2017
Posts: 106
Kudos Received: 13
Solutions: 7

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1417 | 11-25-2019 12:49 AM |
| | 2493 | 11-14-2018 10:45 AM |
| | 2246 | 10-15-2018 03:44 PM |
| | 2120 | 09-25-2018 01:54 PM |
| | 1942 | 08-03-2018 09:47 AM |
04-26-2018
01:02 PM
I created a workflow in NiFi 1.5.0 that reads an XML file from HDFS. After splitting the file into separate <Transaction> elements, I want to read an attribute's value and then act on that value. My original XML looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<Log>
<Transaction Type="1" TrainingModeFlag="true">
<StoreID>240041</StoreID>
...
</Transaction>
</Log>

My workflow splits the XML via SplitXML at depth 1, so after this processor I have this sub-XML:

<?xml version="1.0" encoding="UTF-8"?>
<Transaction Type="1" TrainingModeFlag="true">
<StoreID>240041</StoreID>
...
</Transaction>

I want to extract the value of the Type attribute of the Transaction tag, but it doesn't work for me. Here is my EvaluateXPath processor: My new content attribute is set to an empty string instead of showing the value 1. Using the XPath //@Type works, but I need the exact path, as the Type attribute can also occur in sub-elements. Can someone help?
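A minimal sketch of why the unanchored //@Type is risky, using Python's stdlib ElementTree; the <Item Type="9"/> sub-element here is an invented placeholder standing in for the sub-elements mentioned above that can also carry a Type attribute:

```python
import xml.etree.ElementTree as ET

# Hypothetical split fragment where a sub-element also carries a Type
# attribute (the collision the question is worried about).
fragment = '<Transaction Type="1" TrainingModeFlag="true"><Item Type="9"/></Transaction>'
root = ET.fromstring(fragment)

# Roughly what //@Type matches: every Type attribute anywhere in the tree.
all_types = [e.get("Type") for e in root.iter() if e.get("Type") is not None]
print(all_types)         # prints ['1', '9']

# Anchoring the path at the fragment's root element, as /Transaction/@Type
# would, selects only the attribute on the Transaction element itself.
print(root.get("Type"))  # prints 1
```

So an exact path anchored at the fragment's root avoids picking up Type attributes from nested elements.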
Labels:
- Apache NiFi
04-24-2018
10:02 AM
Reading the value with the XPath //@Type works fine.
04-24-2018
09:56 AM
When I try this, I'm getting an empty string back. Can someone explain why? I'm using NiFi 1.5.0. My XML looks like this: <?xml version="1.0" encoding="UTF-8"?>
<Log>
<Transaction Type="1" TrainingModeFlag="true">
<StoreID>240041</StoreID>
...
</Transaction>
</Log>

My workflow splits the XML via SplitXML at depth 1, so after this processor I have this sub-XML:

<Transaction Type="1" TrainingModeFlag="true">
<StoreID>240041</StoreID>
...
</Transaction>

I want to extract the value of the Type attribute of the Transaction tag, but it doesn't work for me. Here is my EvaluateXPath processor: My new content attribute is set to an empty string instead of showing the value 1.
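A likely cause, sketched with Python's stdlib ElementTree: SplitXML makes each <Transaction> element the root of its own flowfile, so an XPath that still expects <Log> above it (e.g. /Log/Transaction/@Type) no longer matches anything in the fragment:

```python
import xml.etree.ElementTree as ET

full = ('<Log>'
        '<Transaction Type="1" TrainingModeFlag="true">'
        '<StoreID>240041</StoreID>'
        '</Transaction>'
        '</Log>')
fragment = ('<Transaction Type="1" TrainingModeFlag="true">'
            '<StoreID>240041</StoreID>'
            '</Transaction>')

# In the full document, Transaction sits below the Log root ...
print(ET.fromstring(full).find("Transaction").get("Type"))  # prints 1

# ... but after SplitXML, Transaction itself IS the document root, so a
# path that starts at Log matches nothing in the fragment.
root = ET.fromstring(fragment)
print(root.tag)          # prints Transaction
print(root.get("Type"))  # prints 1
```

Under that assumption, /Transaction/@Type should return the value on the split fragment.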
02-20-2018
08:53 AM
@Simon George Yes, that did the trick for me! Thank you for sharing your workaround! I downloaded the Livy jars corresponding to my HDP version from the Hortonworks repo (as shown above) and added them to my Java project:

http://repo.hortonworks.com/content/repositories/releases/org/apache/livy/
- livy-api-0.4.0.2.6.3.0_235.jar
- livy-client-http-0.4.0.2.6.3.0_235.jar

After adding these libraries, the Spark Pi calculation example worked for me: https://github.com/cloudera/livy#using-the-programmatic-api
01-12-2018
10:54 AM
1 Kudo
Yes, that was the solution! Thank you very much! Everything works fine with the following statement now:

SELECT SUM(menge) menge FROM mytable
01-11-2018
02:54 PM
1 Kudo
I need to create checksums in my ETL process. Therefore I need to sum up a numeric column in my JDBC-connected table(s). Copying the data works fine so far, but I was not able to find the correct NiFi processor for this aggregation task. How can I calculate the sum of a table column? Which processor should I use (is there something similar to the SelectHiveQL processor, which works fine for the same aggregation task)? And how can I compare these two results (from two different tables) to check whether the sums are equal? I already tried an ExecuteSQL processor, but there I get a SchemaParseException:
I'm using NiFi 1.4.0 and Hive 1.2.1.
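A sketch of the likely cause of the SchemaParseException: ExecuteSQL derives an Avro schema from the result set's column labels, and an unaliased aggregate produces a label like "SUM(menge)", which is not a legal Avro field name (Avro names must match [A-Za-z_][A-Za-z0-9_]*). Aliasing the column, as in SELECT SUM(menge) menge, avoids this:

```python
import re

# Avro name rule from the Avro specification: a name starts with a letter
# or underscore and continues with letters, digits, or underscores.
AVRO_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

# Unaliased aggregate column label -> illegal Avro field name.
print(bool(AVRO_NAME.match("SUM(menge)")))  # prints False

# Aliased column label -> legal Avro field name.
print(bool(AVRO_NAME.match("menge")))       # prints True
```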
Labels:
- Apache NiFi
12-18-2017
07:09 AM
I already did a few reboots, but it didn't help. I upgraded my HDP cluster via Ambari. I'm using Spark 1.6.3, so I don't need to use livy2, right? It seems to be an issue in Livy 0.4.0.
12-15-2017
10:07 AM
I've got the same problem, also after upgrading to HDP 2.6.3!
11-30-2017
07:17 AM
Now that I have this template, I'll try to change my workflow accordingly. Thank you very much for this easy-looking solution!
11-29-2017
07:05 AM
First of all, thank you for the fast reply! I'm using NiFi version 1.3.0, and the original column in my table is of type Double. I'm reading the data with a QueryDatabaseTable processor and afterwards calling ConvertAvroToJSON. To split the array into separate objects I'm using a SplitJson processor. So far, everything seems to be good. Each object looks like this: {"ID":null,"Timestamp":"2017-05-17 17:45:55.0","Name":"asdhybridUL2N","Value":225.0} In order to remove the "ID" attribute (it is mostly null and not needed later), add the "tags" attribute (which shall contain the fixed values (a=b, c=d) as shown in the question), and change the "Name" attribute to "metric", I use a mix of UpdateAttribute, AttributesToJSON and ReplaceText processors. And that's where the type of the Value attribute changes to String. I also saw the JoltTransformJSON processor, but to be honest, I'm no JSON expert and I didn't understand how to use it in my case.
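A sketch of why the type changes, assuming the described UpdateAttribute/AttributesToJSON detour: NiFi flowfile attributes are plain strings, so routing a numeric field through attributes and writing it back stringifies it; transforming the JSON content itself keeps the type. The exact output shape ("metric", "tags" with a=b, c=d) is taken from the question referenced above:

```python
import json

record = {"ID": None, "Timestamp": "2017-05-17 17:45:55.0",
          "Name": "asdhybridUL2N", "Value": 225.0}

# Flowfile attributes are strings, so pulling Value into an attribute and
# emitting it with AttributesToJSON turns 225.0 into "225.0".
as_attributes = {k: str(v) for k, v in record.items() if k != "ID"}
print(json.dumps(as_attributes))  # Value comes out as the string "225.0"

# Transforming the JSON content directly keeps the numeric type.
out = {"metric": record["Name"], "Timestamp": record["Timestamp"],
       "Value": record["Value"], "tags": "a=b, c=d"}
print(json.dumps(out))  # Value stays the number 225.0
```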