Member since
11-16-2015
892
Posts
650
Kudos Received
245
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5664 | 02-22-2024 12:38 PM |
| | 1388 | 02-02-2023 07:07 AM |
| | 3085 | 12-07-2021 09:19 AM |
| | 4205 | 03-20-2020 12:34 PM |
| | 14158 | 01-27-2020 07:57 AM |
07-01-2020
03:12 AM
You cannot share state between processors; each processor has its own state.
06-01-2020
08:43 PM
Is this (now) considered a NiFi "anti-pattern"? Do you have any idea how to do this using NiFi Record serialization services? I'm under the impression that creating thousands of content files is not the best practice by today's standards, but I'm not sure how to use InvokeHTTP on a full set of records without splitting it into many flowfiles. Any ideas?
03-31-2020
08:56 AM
For example, I want to transform the Zabbix payload from v4.0 to v4.4.

Zabbix JSON payload v4.0 (input):

```json
{
  "hosts": [
    "Host B",
    "Zabbix Server"
  ],
  "groups": [
    "Group X",
    "Group Y",
    "Group Z",
    "Zabbix servers"
  ],
  "tags": [
    {
      "tag": "availability",
      "value": ""
    },
    {
      "tag": "data center",
      "value": "Riga"
    }
  ],
  "name": "Either Zabbix agent is unreachable",
  "clock": 1519304285,
  "ns": 123456789,
  "eventid": 42,
  "value": 1
}
```

The JOLT transform:

```json
[
  {
    "operation": "shift",
    "spec": {
      "hosts": {
        "*": [
          "hosts.[&].host",
          "hosts.[&].name"
        ]
      },
      "*": "&"
    }
  }
]
```

The result (Zabbix v4.4):

```json
{
  "hosts" : [ {
    "host" : "Host B",
    "name" : "Host B"
  }, {
    "host" : "Zabbix Server",
    "name" : "Zabbix Server"
  } ],
  "groups" : [ "Group X", "Group Y", "Group Z", "Zabbix servers" ],
  "tags" : [ {
    "tag" : "availability",
    "value" : ""
  }, {
    "tag" : "data center",
    "value" : "Riga"
  } ],
  "name" : "Either Zabbix agent is unreachable",
  "clock" : 1519304285,
  "ns" : 123456789,
  "eventid" : 42,
  "value" : 1
}
```
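To make the effect of the shift spec concrete, here is a minimal sketch in plain Python (no JOLT library involved) that performs the same transformation: each `hosts` entry is fanned out into both a `host` and a `name` field, and every other key passes through unchanged, matching the `"*": "&"` rule.

```python
import json

payload_v40 = {
    "hosts": ["Host B", "Zabbix Server"],
    "groups": ["Group X", "Group Y", "Group Z", "Zabbix servers"],
    "tags": [
        {"tag": "availability", "value": ""},
        {"tag": "data center", "value": "Riga"},
    ],
    "name": "Either Zabbix agent is unreachable",
    "clock": 1519304285,
    "ns": 123456789,
    "eventid": 42,
    "value": 1,
}

def to_v44(doc):
    """Expand each hosts entry into {"host": ..., "name": ...};
    pass every other key through unchanged (the "*": "&" rule)."""
    out = dict(doc)
    out["hosts"] = [{"host": h, "name": h} for h in doc["hosts"]]
    return out

payload_v44 = to_v44(payload_v40)
```

This is only an illustration of what the spec does; in NiFi the transform itself would run inside a JoltTransformJSON processor.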
03-24-2020
05:23 AM
Hello mburgess, I am able to extract attributes using a GetFile > SplitXml > EvaluateXQuery flow. Now, can you please tell me how I can store these dynamic attributes in a PostgreSQL database using the PutSQL processor? Is this the correct expression I am putting in PutSQL?

INSERT INTO alstom_radioscopy_amsterdam_blue."xml_log"(block_id,kp_begin,kp_end) VALUES (${block_id.text()},${kp_begin.text()},${kp_end.text()});
03-08-2020
09:45 AM
Hi @Kart,
As this is a thread which was marked 'Solved' over three years ago, you would have a better chance of receiving a resolution by posting a new question. This will also present you with the opportunity to include details specific to your environment that could aid other members in providing a more relevant answer to your question.
03-03-2020
01:25 PM
1 Kudo
@asfou NiFi does not contain any processors that support Hive version 2.x. The latest versions of Apache NiFi offer processor components based on the Hive 1.x and Hive 3.x clients. To support Hive 2.x, you would need to build your own custom processors using the Hive 2.x client. Matt
02-18-2020
08:03 AM
Hi All,
I am still facing the issue after adding the property below to both hive-site.xml and core-site.xml:

<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>

I am getting the error below:

org.apache.commons.dbcp.SQLNestedException: Cannot create JDBC driver of class 'org.apache.hive.jdbc.HiveDriver' for connect URL jdbc:hive2://ux329tas101.ux.hostname.net:10000/default;principal=<principal name>;ssl=true

Could you please help me with this?
Regards,
Swadesh Mondal
02-14-2020
05:55 AM
For small XML files, you could use ExtractText to get the entire content into an attribute, then UpdateRecord with an XMLReader, adding a property for your new field (let's say "/content") and whatever writer you wish. You will have to specify the output schema for the writer, to include all the fields parsed by the XMLReader in addition to a CLOB/BLOB/String "content" field. If you want to exclude fields from the XML, just leave them out of the output schema. Then you can use a matching reader (e.g. an AvroReader if you chose an AvroRecordSetWriter) in PutDatabaseRecord. If this doesn't work for your use case, you may need to use SplitXml and work on individual records. This will degrade the performance of PutDatabaseRecord unless you merge the records back together later (using MergeRecord, for example).
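As a sketch of what the writer's output schema might look like, here is a hypothetical Avro schema built in Python. The field names `id` and `name` are assumptions standing in for whatever fields your XMLReader actually parses; the point is the extra string `content` field alongside them.

```python
import json

# Hypothetical output schema for the record writer: the fields parsed
# from the XML plus a "content" field holding the whole document.
output_schema = {
    "type": "record",
    "name": "xml_with_content",
    "fields": [
        {"name": "id", "type": ["null", "long"]},      # parsed by XMLReader (assumed field)
        {"name": "name", "type": ["null", "string"]},  # parsed by XMLReader (assumed field)
        {"name": "content", "type": "string"},         # full XML captured by ExtractText
    ],
}

schema_text = json.dumps(output_schema, indent=2)
```

Any field omitted from `fields` would simply be dropped by the writer, which is how the exclusion described above works.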
02-11-2020
06:00 AM
2 Kudos
That error indicates that your CSVReader is not treating the first line as a header. Even if you specify an explicit schema, if you don't set Treat First Line As Header to true, the reader will treat the header line as a data line, and when it tries to parse it as data (numbers, for example) it will fail.
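The failure mode can be illustrated with Python's `csv` module (a sketch of the same idea, not NiFi itself, with made-up column names): without header handling, the header cell "age" gets parsed as if it were a number and fails; with the first line treated as a header, parsing succeeds.

```python
import csv
import io

data = "name,age\nalice,34\nbob,29\n"

# Without a header: every line, including the first, is a data row,
# so converting the "age" column to int hits the header cell "age".
rows = list(csv.reader(io.StringIO(data)))
failure = None
try:
    [int(r[1]) for r in rows]
except ValueError as e:
    failure = str(e)  # e.g. invalid literal for int(): 'age'

# With the first line treated as a header, conversion succeeds.
reader = csv.DictReader(io.StringIO(data))
ages = [int(r["age"]) for r in reader]
```

This mirrors what the CSVReader does when Treat First Line As Header is left false versus set to true.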