Member since: 11-16-2015
Posts: 905
Kudos Received: 665
Solutions: 249

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 423 | 09-30-2025 05:23 AM |
| | 744 | 06-26-2025 01:21 PM |
| | 637 | 06-19-2025 02:48 PM |
| | 841 | 05-30-2025 01:53 PM |
| | 11348 | 02-22-2024 12:38 PM |
12-01-2017
03:05 PM
1 Kudo
It is highly recommended not to put your JDBC driver JAR(s) in NiFi's lib/ directory, as they can disturb the behavior of other components in the system. Instead, I recommend a separate flat directory containing the driver JAR and all of its dependencies. Also, on Windows you may need to use the URL style "file://C/" or just "/" instead of "C:\", but I'm not sure about that part.

Another caveat with Hive drivers is that some (including the official Apache Hive JDBC driver that comes with NiFi's Hive bundle) do not support all JDBC methods, such as setQueryTimeout(), or get at table metadata differently (from the ResultSetMetaData rather than the DatabaseMetaData, or vice versa) than what ExecuteSQL expects. That is why the SelectHiveQL and PutHiveQL processors exist: they can include the Hive driver and perform any driver-specific functions/workarounds as necessary. So you may find that the generic SQL processors do not work with your Hive driver; I am not familiar with that particular driver, so I can't say for sure. Errors such as "Method not supported" usually indicate the scenario I'm describing.
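As an illustration, a DBCPConnectionPool controller service pointing at such a flat driver directory might be configured along these lines (the class name and path here are hypothetical examples, not values from the question):

```
Database Driver Class Name  = com.example.hive.jdbc.HiveDriver
Database Driver Location(s) = /opt/nifi/drivers/hive/
```

Keeping the driver and its dependencies in their own directory like this avoids classpath conflicts with NiFi's bundled components.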
11-30-2017
06:27 PM
After ExecuteStreamCommand, you'll want an UpdateAttribute processor to set "filename" to "$(unknown).csv", or a slightly more complicated expression if you are trying to replace the .xml extension with .csv rather than just appending it.
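A sketch of that "slightly more complicated expression", assuming the incoming filename ends in .xml and using NiFi Expression Language's substringBeforeLast function to strip the old extension before appending the new one:

```
${filename:substringBeforeLast('.xml')}.csv
```

This would be the value of the "filename" property in UpdateAttribute.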
11-29-2017
06:20 PM
Yeah the JOLT DSL can be confusing at times. Here's a Chain Spec that does what you describe above, so you can replace your processors with JoltTransformJSON: [
{
"operation": "shift",
"spec": {
"Name": "metric",
"Timestamp": "&"
}
},
{
"operation": "default",
"spec": {
"tags": {
"c": "d",
"a": "b"
}
}
}
]
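To make the two operations concrete, here is a plain-Python sketch of what that chain does to a single record (the input values are hypothetical; JOLT itself runs inside JoltTransformJSON):

```python
# Hypothetical input record
record = {"Name": "cpu_load", "Timestamp": 1511900000}

# "shift": rename Name -> metric; "&" keeps Timestamp under the same key
shifted = {"metric": record["Name"], "Timestamp": record["Timestamp"]}

# "default": add a tags object only if one is not already present
shifted.setdefault("tags", {"c": "d", "a": "b"})
```

The "default" step is a no-op for any record that already carries a tags field, which is exactly how JOLT's default operation behaves.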
11-28-2017
06:53 PM
Looks like it might just have been a typo between "flowFile" and "flowfile".
11-28-2017
06:42 PM
What version of NiFi are you using? Is the "value" column in your database table a String or a Float/Double? What processor(s) are you using to read from the database? If you are using ExecuteSQL, could you do something like the following?

SELECT metric, CAST(value AS DOUBLE) AS value, timestamp, tags FROM myTable

Alternatively, as of NiFi 1.2.0 (HDF 3.0) you can use the JoltTransformJSON processor to do type conversion. Also, if you know what the schema is supposed to be, you could use ConvertRecord with a JsonRecordSetWriter associated with the "correct" schema; the reader can be an AvroReader that uses the Embedded Schema.
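For reference, the effect of that CAST on a single serialized row can be sketched in plain Python (the field values here are hypothetical examples):

```python
import json

# Hypothetical row as JSON, with "value" incorrectly carried as a string
raw = '{"metric": "cpu_load", "value": "42.5", "timestamp": 1511900000, "tags": "host=a"}'
record = json.loads(raw)

# Coerce the string to a double, as CAST(value AS DOUBLE) would do in the query
record["value"] = float(record["value"])
```

Doing the conversion in the SQL itself (via CAST) is usually cleaner, since the downstream Avro/JSON schema then carries the correct type from the start.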
11-28-2017
06:37 PM
Is it possible to share your nifi-app.log on this question? Also, does this driver work from other utilities (e.g., Squirrel SQL)?
11-27-2017
04:21 PM
Is there anything else underneath that stack trace in nifi-app.log? There is usually a Caused By with a ClassNotFoundException or something like that.
11-16-2017
04:53 PM
You can do it without a schema registry, if your readers and writers "Use 'Schema Text' Property" and you hardcode the schema into the Schema Text property. Since you're using the same schema for both the reader and the writer, it's easier to maintain in a registry, but it's only a simple copy-paste if you don't want to use one.
11-16-2017
02:49 PM
3 Kudos
It appears you want to set the destination path to the value of type, followed by the value of id, followed by data.txt, with the content of that file being the single-element JSON array containing the object that provided those values. If that is the case: as of NiFi 1.3.0, there is a PartitionRecord processor which will do most of what you want. You can create a JsonReader using the following example schema:

{"type":"record","name":"test","namespace":"nifi",
"fields": [
{"name":"type","type":"string"},
{"name":"id","type":"string"},
{"name":"content","type":"string"}
]
}

You can also create a JsonRecordSetWriter that inherits the schema (as of NiFi 1.4.0) or uses the same one (prior to NiFi 1.4.0). Then in PartitionRecord you would create two user-defined properties, say record.type and record.id, whose values are the RecordPaths to the corresponding fields. Given your example data, you will get 4 flow files, each containing the data from one of the 4 groups you mention above. Additionally, you will have record.type and record.id attributes on those flow files. You can route them to UpdateAttribute, where you set filename to data.txt and absolute.path to /${record.type}/${record.id}. Then you can send them to PutHDFS, where you set the Directory to ${absolute.path}.
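Assuming PartitionRecord names the flow file attributes after the user-defined properties (record.type and record.id), the UpdateAttribute step might look like this (property name = Expression Language value):

```
filename      = data.txt
absolute.path = /${record.type}/${record.id}
```

PutHDFS then only needs its Directory property set to ${absolute.path}.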
11-16-2017
02:17 PM
1 Kudo
Thanks very much! I hope to write another series for InvokeScriptedProcessor, ScriptedReportingTask, ScriptedReader, and ScriptedRecordSetWriter someday 🙂