Member since: 08-14-2018
Posts: 5
Kudos Received: 0
Solutions: 0
04-15-2020 07:46 AM
Good afternoon, we need to build a flow which extracts data from a SQL database and, based on the content, names the output file (name)_(minimum value)_(maximum value)_(time of extract).json. On top of this, the content of the file needs to be compressed to reduce network traffic. Any idea how this can be accomplished?

I was trying to use QueryRecord to determine the minimum and maximum, but the original data gets replaced by the min/max result, and attempting to merge the records back is problematic since we don't have control of the other part of the flow. I read about the Wait and Notify processors, but these would require other applications, and we would like to stick with native NiFi processors. Thanks a lot for your assistance.
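A minimal sketch of the filename and compression parts, assuming the minimum and maximum have already been promoted to flowfile attributes (the attribute names min.value and max.value below are hypothetical): an UpdateAttribute processor can build the filename with Expression Language, and a CompressContent processor can then gzip the payload.

    UpdateAttribute property:
      filename = ${name}_${min.value}_${max.value}_${now():format('yyyyMMddHHmmss')}.json

    CompressContent properties:
      Mode               = compress
      Compression Format = gzip

If I remember correctly, CompressContent can also adjust the filename extension (appending .gz) when its Update Filename property is set to true, which keeps the attribute consistent with the compressed content.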
Tags: NiFi
02-11-2019 07:08 PM
I'm encountering the above error when using inner queries. I don't know if it's related, but standard queries with no inner queries or table-name references work fine. To make sure there is nothing wrong with the queries themselves, I also tested them in DBeaver with positive results. Can anyone help me remove this error when querying over ODBC?
08-20-2018 09:12 AM
@Avinash, thanks for your reply. This was solved by using the same driver version as we had on Production.
08-15-2018 07:22 AM
@Herald, the only thing being modified is the ODBC connection. Basically, on QA we have the same configuration (host, schema, and authentication) as on Production; the only difference is the driver version. The SSIS package is built in such a way as to minimize user manipulation, so only the connection strings can be modified. @Avinash, do I need to set hive.resultset.use.unique.column.names=false in the ODBC configuration, or is it a configuration on the Hive server?
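For anyone finding this later, a hedged sketch of both options; the SSP_ prefix below assumes the Hortonworks/Cloudera Hive ODBC driver's server-side properties mechanism, so check your driver's documentation before relying on it:

    -- Per session, on the Hive side (e.g. in Beeline):
    set hive.resultset.use.unique.column.names=false;

    # In the ODBC DSN / connection string, passed through as a server-side property:
    SSP_hive.resultset.use.unique.column.names=false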
08-14-2018 06:16 PM
Hi all, we have an SSIS package which consumes data over an ODBC connection based on the Hortonworks Hive driver. We have noticed that when we use the DSN connection from our production environment the package works fine, but when calling it from another environment the query result set changes: the table qualifier is being added to the column names. We have set both ODBC connections with the same configuration, but we're still facing this issue in our lower environments. Example: on Production we get "column" while on QA we get "table.column". Could this be an issue with the default driver settings? We're using v2.1.5.1006 on Production and v2.1.6.1023 on QA. Or is there an ODBC setting that stops the qualifier from being added to the column name? Thanks for your assistance.
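To illustrate with a hypothetical table t, this is the behavior that hive.resultset.use.unique.column.names (the setting mentioned in the replies above) controls:

    -- hive.resultset.use.unique.column.names=true:  result column is named "t.col"
    -- hive.resultset.use.unique.column.names=false: result column is named "col"
    SELECT col FROM t;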