Member since: 10-08-2018
Posts: 35
Kudos Received: 2
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 2429 | 10-19-2018 08:32 AM |
04-13-2020
11:42 PM
We've also found that this problem is an official bug of ExecuteSQL in Apache NiFi 1.7.1; it is fixed in 1.10 and later versions.
04-13-2020
11:06 PM
A little later we found that we can cast the date data to timestamp without any timezone manipulation, and it lands in a date-type column with the correct value. Example: select deal_date::timestamp as deal_date from schema_name.table_name. But if you think this means you can change the column's data type in the table as well and everything will be fine, you're wrong: in that case your date data becomes incorrect again )
04-11-2020
03:34 PM
1 Kudo
I've found a solution, but I don't think it's the best way to deal with this. I added a time zone clause to the date column in the SQL query (for pgsql): select deal_date at time zone 'UTC-6' as deal_date from schema_name.table_name. Why UTC-6? For equality with 00:00:00 in my country's timezone (I found this empirically). I then put this timestamp data into a pg table column with the date data type, and that was fine. If you have a better way to deal with this issue, just let me know.
04-11-2020
02:31 PM
I think this is a bug in version 1.7.1 of the ExecuteSQL processor, because I tried to reproduce this situation on NiFi 1.11.1 and had no problems with dates. But I still have no idea how to solve this issue in 1.7.1 in a clean way.
04-11-2020
06:24 AM
Hello! I've run into a problem with the ExecuteSQL processor in NiFi 1.7.1, specifically in how it extracts date data from a database table. With the setting Use Avro Logical Types = true, it reads data from the db table in its underlying types (and for future needs, mode = true would be the better choice for me), BUT in this mode there is a problem with the date data type: the processor returns each date minus one day. For better understanding, here is an example with screenshots:
1. SELECT from the db table = the true result.
2. ExecuteSQL with Use Avro Logical Types = true gives the underlying data types from the db table; that's ok for our needs.
3. The same SQL query as in the db client.
4. But a WRONG date (minus 1 day) in the outgoing flow file.
If we change Use Avro Logical Types to false, the date is right, BUT the data type becomes STRING, which is neither right nor convenient for me.
5. ExecuteSQL with Use Avro Logical Types = false converts data from the db to the STRING data type; that's not ok for our needs.
6. But it gives us the right data, just like the db client (the SQL query didn't change).
And when I cast the date column to the timestamp type, my suspicion about the timezone was fully confirmed!
7. SQL query with a cast to timestamp.
8. The time is shifted by -3 hours (our country's actual timezone on the Apache NiFi server).
So here's the question: how can I solve this problem without resorting to crutches? ) Thanks!
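A hedged sketch of the suspected mechanism behind the minus-one-day effect. This is an illustrative Python reconstruction, not NiFi's actual code; the function name and offsets are assumptions. If a DATE is first turned into local midnight, converted to epoch milliseconds, and then floor-divided into Avro "days since epoch", any timezone east of UTC loses a day:

```python
from datetime import date, datetime, timedelta, timezone

# Illustrative only (not NiFi's code): local midnight of a date in a
# zone east of UTC falls on the *previous* UTC day, so floor division
# by 86_400_000 ms drops a day.
def days_since_epoch_via_local_midnight(d: date, utc_offset_hours: int) -> int:
    local_midnight = datetime(
        d.year, d.month, d.day,
        tzinfo=timezone(timedelta(hours=utc_offset_hours)),
    )
    epoch_ms = int(local_midnight.timestamp() * 1000)
    return epoch_ms // 86_400_000  # floor division drops the partial day

correct_days = (date(2020, 4, 11) - date(1970, 1, 1)).days
print(correct_days)                                               # 18363
print(days_since_epoch_via_local_midnight(date(2020, 4, 11), 0))  # 18363
print(days_since_epoch_via_local_midnight(date(2020, 4, 11), 3))  # 18362 (one day early)
```

Under this assumption, the observations fit: a cast to timestamp sidesteps the days conversion entirely, which is why it round-trips correctly.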
Labels:
- Apache NiFi
06-11-2019
01:06 PM
I solved this task with a TransformXML processor, removing the SOAP envelope with an .xsl stylesheet. But if a ReplaceText processor can also solve it, I'd like to see any suggestions here. Thanks!
06-10-2019
04:26 PM
Hi all! I don't understand why this regex doesn't work in NiFi's ReplaceText processor.
Search Value: ^<env.*"><env:Body>(.*)<\/env:Body><\/env:Envelope>$
Input text: <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"><env:Body><hbdhsbfsdhbfdhdfshdfh>vsaucvash </env:Body></env:Envelope>
I need to cut out this: <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"><env:Body> and this: </env:Body></env:Envelope>
The full pattern ^<env.*"><env:Body>(.*)<\/env:Body><\/env:Envelope>$ works on text like the above only when split into two separate passes:
- ^<env.*"><env:Body>(.*) cuts <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"><env:Body>
- (.*)<\/env:Body><\/env:Envelope>$ cuts </env:Body></env:Envelope>
But I need to cut both at once, in one processor. How can I do that?
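For what it's worth, the pattern does work as a single substitution when applied to the whole document at once; here is a plain-Python check outside NiFi (in Python `/` needs no escaping, and re.DOTALL lets `(.*)` cross newlines). As an assumption, the ReplaceText equivalent would be Evaluation Mode = "Entire text" with Replacement Value $1:

```python
import re

soap = ('<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">'
        '<env:Body><hbdhsbfsdhbfdhdfshdfh>vsaucvash </env:Body></env:Envelope>')

# One pattern, one substitution: capture the body, replace the whole
# match with the captured group.
pattern = r'^<env.*?"><env:Body>(.*)</env:Body></env:Envelope>$'
body = re.sub(pattern, r'\1', soap, flags=re.DOTALL)
print(body)  # <hbdhsbfsdhbfdhdfshdfh>vsaucvash
```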
Labels:
- Apache NiFi
06-07-2019
02:29 PM
Thanks, Matt! Sorry for the late answer 🙂
02-20-2019
07:48 AM
Picture attached.
02-20-2019
07:44 AM
Hi all! Does anybody know why a processor's thread info (upper-right corner) turns red with the message "0 active threads (1 terminated)"? The processor works fine, but this red indicator worries me. How critical is it, and how can I fix it? Thanks! Screenshot attached.
Labels:
- Apache NiFi
02-20-2019
07:34 AM
I've already found the answer on my own 🙂 I'll try to publish a tutorial a little later.
02-01-2019
04:58 PM
Hi, folks! Please tell me how to parse the XML below: validate it with a .xsd file in a ValidateXML processor, transform it with a .xsl file in a TransformXML processor, then send all of those attributes into a relational DB with PutSQL and get a final table like the one below:
|CNUM |clientName  |qualInvestor|agreementNum|agreementService|agreementDate| => column names
|AAAAA|Batman Fargo| true       | 33067      | blablabla      | 2018-10-01  | => column values in row
|AAAAA|Batman Fargo| true       | 55121      | blablabla      | 2018-10-01  | => column values in row
|AAAAA|Batman Fargo| true       | 60012      | blablabla      | 2018-10-01  | => column values in row
Sorry for this "nice" table 🙂
<?xml version="1.0" encoding="UTF-8"?>
<clientList xmlns="http:blablabla" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="blablabla ClientList.xsd">
<client>
<CNUM>AAAAA</CNUM>
<clientName>Batman Fargo</clientName>
<qualInvestor>true</qualInvestor>
<agreementList>
<agreement>
<agreementNum>33067</agreementNum>
<agreementService>blablabla</agreementService>
<agreementDate>2018-10-01</agreementDate>
</agreement>
<agreement>
<agreementNum>55121</agreementNum>
<agreementService>blablabla</agreementService>
<agreementDate>2018-10-01</agreementDate>
</agreement>
<agreement>
<agreementNum>60012</agreementNum>
<agreementService>blablabla</agreementService>
<agreementDate>2018-10-01</agreementDate>
</agreement>
</agreementList>
</client>
</clientList>
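Independent of the NiFi flow itself, the target flattening can be sketched in plain Python with the standard library (the namespace URI below is a placeholder assumption standing in for the "blablabla" one in the sample). Client-level fields are repeated on every agreement row, producing one output row per `<agreement>`:

```python
import xml.etree.ElementTree as ET

xml_text = """<clientList xmlns="http://example.com/clients">
  <client>
    <CNUM>AAAAA</CNUM>
    <clientName>Batman Fargo</clientName>
    <qualInvestor>true</qualInvestor>
    <agreementList>
      <agreement>
        <agreementNum>33067</agreementNum>
        <agreementService>blablabla</agreementService>
        <agreementDate>2018-10-01</agreementDate>
      </agreement>
    </agreementList>
  </client>
</clientList>"""

ns = {"c": "http://example.com/clients"}
root = ET.fromstring(xml_text)

rows = []
for client in root.findall("c:client", ns):
    # client-level columns, repeated on every agreement row
    base = [client.findtext(f"c:{tag}", namespaces=ns)
            for tag in ("CNUM", "clientName", "qualInvestor")]
    for ag in client.findall("c:agreementList/c:agreement", ns):
        rows.append(base + [ag.findtext(f"c:{tag}", namespaces=ns)
                            for tag in ("agreementNum", "agreementService",
                                        "agreementDate")])

for row in rows:
    print(row)
```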
Labels:
- Apache NiFi
11-21-2018
12:17 PM
@Matt Clarke Yes, I see these logs in my nifi-app.log, but I don't need any changes to the default logback.xml for that 🙂 Maybe it's all about the NiFi version. When we migrate from NiFi 1.7.1 to 1.8.0, I'll try this task again 🙂 Thank you, Matt!
11-19-2018
10:17 AM
Hi @Matt Clarke! I did as you advised (removed/commented out the STATUS_FILE appender and let the loggers use the default appender, nifi-app.log, to see if any output is created), but I don't understand what kind of new info I should be seeing. When I first tried to configure these separate Connections logs, I thought I could monitor the queues with them. But today, after looking at these Connections logs (I had deliberately created a queue issue beforehand), I can't find any value in them that tells me anything about my queue issue. I searched, grepped by connection ID, etc., but couldn't find anything. My NiFi version is nifi-1.7.1-RC1.
11-16-2018
05:39 PM
@Matt Clarke logback.txt
11-16-2018
05:38 PM
@Matt Clarke Ok
11-16-2018
05:36 PM
cat logback.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<configuration scan="true" scanPeriod="30 seconds">
<contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
<resetJUL>true</resetJUL>
</contextListener>
<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<maxFileSize>100MB</maxFileSize>
<!-- keep 30 log files worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<immediateFlush>true</immediateFlush>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="USER_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-user.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-user_%d.log</fileNamePattern>
<!-- keep 30 log files worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="BOOTSTRAP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-bootstrap.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-bootstrap_%d.log</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- Test configuration for ControllerStatusReportingTask 2018-11-07 by AL -->
<appender name="STATUS_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-status.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-status_%d.log</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->
<logger name="org.apache.nifi" level="INFO"/>
<logger name="org.apache.nifi.processors" level="WARN"/>
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
<logger name="org.apache.nifi.processors.standard.LogMessage" level="INFO"/>
<logger name="org.apache.nifi.controller.repository.StandardProcessSession" level="WARN" />
<logger name="org.apache.zookeeper.ClientCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxnFactory" level="ERROR" />
<logger name="org.apache.zookeeper.server.quorum" level="ERROR" />
<logger name="org.apache.zookeeper.ZooKeeper" level="ERROR" />
<logger name="org.apache.zookeeper.server.PrepRequestProcessor" level="ERROR" />
<logger name="org.apache.calcite.runtime.CalciteException" level="OFF" />
<logger name="org.apache.curator.framework.recipes.leader.LeaderSelector" level="OFF" />
<logger name="org.apache.curator.ConnectionState" level="OFF" />
<!-- Logger for managing logging statements for nifi clusters. -->
<logger name="org.apache.nifi.cluster" level="INFO"/>
<!-- Logger for logging HTTP requests received by the web server. -->
<logger name="org.apache.nifi.server.JettyServer" level="INFO"/>
<!-- Logger for managing logging statements for jetty -->
<logger name="org.eclipse.jetty" level="INFO"/>
<!-- Suppress non-error messages due to excessive logging by class or library -->
<logger name="org.springframework" level="ERROR"/>
<!-- Suppress non-error messages due to known warning about redundant path annotation (NIFI-574) -->
<logger name="org.glassfish.jersey.internal.Errors" level="ERROR"/>
<!--
Logger for capturing user events. We do not want to propagate these
log events to the root logger. These messages are only sent to the
user-log appender.
-->
<logger name="org.apache.nifi.web.security" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.api.config" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.cluster.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.filter.RequestLogger" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<!--
Logger for capturing Bootstrap logs and NiFi's standard error and standard out.
-->
<logger name="org.apache.nifi.bootstrap" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<logger name="org.apache.nifi.bootstrap.Command" level="INFO" additivity="false">
<appender-ref ref="CONSOLE" />
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Out will be logged with the logger org.apache.nifi.StdOut at INFO level -->
<logger name="org.apache.nifi.StdOut" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Error will be logged with the logger org.apache.nifi.StdErr at ERROR level -->
<logger name="org.apache.nifi.StdErr" level="ERROR" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Logger to redirect controller status/reporting task Connections output to a different file. -->
<logger name="org.apache.nifi.controller.ControllerStatusReportingTask.Connections" level="INFO" additivity="false">
<appender-ref ref="STATUS_FILE"/>
</logger>
<logger name="org.apache.nifi.controller.ControllerStatusReportingTask.Processors" level="INFO" additivity="false">
<appender-ref ref="STATUS_FILE"/>
</logger>
<root level="INFO">
<appender-ref ref="APP_FILE"/>
</root>
</configuration>
11-16-2018
03:25 PM
@Matt Clarke Nope, still nothing. And YES, I've restarted the NiFi service and checked logback.xml for the actual config.
11-16-2018
01:23 PM
Hi @Matt Clarke! Thank you for your detailed answer. I've tried 3 or 4 different ways to configure these logs! And yes, I always check my configs 5 to 10 times before restarting the NiFi service. Here are my permissions and config (below). I still don't have any file with these configured separate status logs.
<!-- Test configuration for ControllerStatusReportingTask 2018-11-07 by AL -->
<appender name="STATUS_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-status.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>/var/log/nifi/nifi-status_%d.log</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- Logger to redirect controller status/reporting task Connections output to a different file. -->
<logger name="org.apache.nifi.controller.ControllerStatusReportingTask.Connections" level="INFO" additivity="false">
<appender-ref ref="STATUS_FILE"/>
</logger>
<logger name="org.apache.nifi.controller.ControllerStatusReportingTask.Processors" level="INFO" additivity="false">
<appender-ref ref="STATUS_FILE"/>
</logger>
11-15-2018
07:12 AM
I don't have Ambari, so I need to do this configuration without it. Does anybody know how to do this correctly?
11-13-2018
06:11 AM
@Jonathan Sneep No, via Ambari it's too simple :))) I'm trying to hard-code it via the command line 🙂 Maybe that's why it doesn't work. If so, how can I configure logback.xml with my method (via the CLI)? Thanks!
11-12-2018
12:44 PM
Hi @Jonathan Sneep. Thanks for your answer, but I've already done exactly that and still nothing. My steps: 1. Add the config to logback.xml. 2. Restart the NiFi service. 3. Do something in NiFi (like start a dataflow). And the result is nothing: no new log file with the separate logs I configured. What am I doing wrong?
11-09-2018
12:26 PM
Hi guys! Please see this tutorial before reading my question: https://community.hortonworks.com/articles/79849/nifi-monitoring-controllerstatusreportingtask-with.html I tried this method, but it doesn't work: a separate log file is not even created. I've already tried many modifications of this configuration, but still nothing. What am I doing wrong? The key point is that a separate log file is not even created. PS: I've already asked this question under that tutorial, but I still have no answer. Thanks for the help!
Labels:
- Apache Hadoop
- Apache NiFi
11-08-2018
08:34 AM
Hi, Jobin George! I tried this method, but it doesn't work: a separate log file is not even created. I've already tried many modifications of this configuration, but still nothing. What am I doing wrong? The key point is that a separate log file is not even created.
10-19-2018
08:34 AM
I thought so too, but no 🙂 With a semicolon you get a different kind of ORA error 🙂
10-19-2018
08:32 AM
1 Kudo
I've fixed it! 🙂 See the attached image. A Custom Query is not needed.
10-19-2018
08:09 AM
Hi guys! I have an issue in NiFi: when I run the query SELECT * FROM my_db.my_table, I get [ORA-00933: SQL command not properly ended]. I don't understand how to fix this, because the same query works fine in my SQL client. I'm using the QueryDatabaseTable processor. Any suggestions? Thanks!
Tags:
- nifi-processor
Labels:
- Apache NiFi
10-10-2018
02:19 PM
Thanks, Andrew! We've already implemented something like the Notification System, which is why we've started thinking about the bulletin properties 🙂