Member since: 09-29-2015
Posts: 63
Kudos Received: 19
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1733 | 09-30-2017 06:13 AM
 | 1408 | 06-09-2017 02:31 AM
 | 5136 | 03-15-2017 04:04 PM
 | 5245 | 03-15-2017 08:37 AM
 | 1488 | 12-11-2016 01:15 PM
10-16-2023
11:05 AM
Getting the same error with the InvokeHTTP processor. Although the request is processed successfully, I still get this error in the NiFi UI.
05-11-2021
05:09 AM
1 Kudo
Thanks @VidyaSargur - I just started a new thread, per your suggestion.
02-11-2021
06:22 AM
I think this was due to the Hive metastore service not running. You should run the command "hive --service metastore &" first and then start the Hive console.
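Not part of the original answer, but a quick way to confirm the metastore is reachable once the service is up. This is only a sketch and assumes a Spark build with Hive support and a hive-site.xml on the classpath:

```scala
import org.apache.spark.sql.SparkSession

// Minimal check: if the Hive metastore service is running, listing databases succeeds;
// if it is down, this fails with a metastore connection error.
object MetastoreCheck extends App {
  val spark = SparkSession.builder()
    .appName("metastore-check")
    .enableHiveSupport()
    .getOrCreate()

  spark.sql("SHOW DATABASES").show()
  spark.stop()
}
```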
10-09-2017
09:16 PM
My bad, thanks for the info. I assumed Falcon and Oozie behaviour would be the same.
10-25-2018
10:29 AM
@Santhosh B Gowda Why would you do that? Why not just curl from the shell action? A part of me would die if I did this in my workflow!
09-23-2017
09:23 AM
@nyakkanti I do not see any pending jobs in YARN. Only 3 jobs are listed and their status is finished/succeeded. In Ambari, I can still see the 4 queries running as mentioned in the original post.
05-05-2017
06:17 PM
@nyakkanti Only processor properties that support the NiFi Expression Language can be configured to use FlowFile attributes. Even when a processor property says "Supports Expression Language: false", you may still be able to use a simple ${<FlowFile-attribute-name>} to return the value from an attribute key/value pair. However, this is never true for sensitive properties such as passwords. Password properties are especially difficult since those values are encrypted when the configuration is applied. So if you entered ${<FlowFile-attribute-name>}, that literal string is what gets encrypted, and the processor then decrypts it when making the connection. Nowhere in that process does it ever retrieve a value from the FlowFile. Thanks,
Matt
03-15-2017
12:43 PM
1 Kudo
@nyakkanti FlowFiles consist of FlowFile attributes and FlowFile content.
- FlowFile attributes are kept in heap during processing and persisted to the FlowFile repository.
- FlowFile content is kept in claims within the content repository.

A claim is moved to archive once there no longer exists any active FlowFile anywhere in your dataflow pointing at it. Archiving is enabled by default but can be disabled in the nifi.properties file: nifi.content.repository.archive.enabled=true. If you disable archiving, the claim is purged from NiFi's content repository rather than being archived.

What is important to understand is how claims work. By default (per the nifi.properties file), a claim can contain up to 100 FlowFiles or roughly 10 MB of data, whichever occurs first. A claim will not be purged until every piece of content in that claim has completed processing; as long as just one piece of content in the claim is still referenced, the entire claim will continue to exist in the content repository.

As far as FlowFile attributes are concerned, they are persisted in NiFi provenance based on the retention configured in the nifi.properties file. You can perform provenance searches within NiFi to return FlowFile history and look at the attributes of those FlowFiles at any point in their lineage. Thanks, Matt
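For reference, the related settings in nifi.properties look roughly like this. The property names and defaults below are from NiFi 1.x of that era and are listed here only as an illustration; verify them against your own version before relying on them:

```
# Content claims: content from many small FlowFiles is appended into one claim
nifi.content.claim.max.flow.files=100
nifi.content.claim.max.appendable.size=10 MB

# Archive claims instead of deleting them once no active FlowFile references them
nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
```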
03-15-2017
04:04 PM
Deleting the journal folder and restarting NiFi did solve the issue. Not sure what caused it in the first place.
03-21-2017
06:00 PM
1 Kudo
Analysis:
As per Oracle:

Oracle Database 8i and earlier versions did not support TIMESTAMP data, but Oracle DATE data used to have a time component as an extension to the SQL standard. So, Oracle Database 8i and earlier versions of JDBC drivers mapped oracle.sql.DATE to java.sql.Timestamp to preserve the time component. Starting with Oracle Database 9.0.1, TIMESTAMP support was included and 9i JDBC drivers started mapping oracle.sql.DATE to java.sql.Date. This mapping was incorrect as it truncated the time component of Oracle DATE data. To overcome this problem, Oracle Database 11.1 introduced a new flag, mapDateToTimestamp. The default value of this flag is true, which means that by default the drivers will correctly map oracle.sql.DATE to java.sql.Timestamp, retaining the time information. If you still want the incorrect but 10g-compatible oracle.sql.DATE to java.sql.Date mapping, you can get it by setting the value of the mapDateToTimestamp flag to false. Ref link is here.

Solution:
So, as instructed by Oracle, set the connection property oracle.jdbc.mapDateToTimestamp to false:

```scala
Class.forName("oracle.jdbc.driver.OracleDriver")

// Pass the flag to the Oracle driver as a JDBC connection property
val info: java.util.Properties = new java.util.Properties()
info.put("user", user)
info.put("password", password)
info.put("oracle.jdbc.mapDateToTimestamp", "false")

val jdbcDF = spark.read.jdbc(jdbcURL, tableFullName, info)
```

Also add an Oracle JDBC driver jar that supports the "oracle.jdbc.mapDateToTimestamp" flag, e.g. ojdbc14.jar. Hope it helps!
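For what it's worth, the same flag can also be passed through the DataFrameReader options instead of a java.util.Properties object, since Spark forwards options it does not recognize to the JDBC driver as connection properties. A minimal sketch, assuming the same jdbcURL, tableFullName, user, and password values as above:

```scala
// Equivalent sketch using .option(...) instead of java.util.Properties;
// options Spark does not recognize are handed to the Oracle driver as connection properties.
val jdbcDF2 = spark.read
  .format("jdbc")
  .option("url", jdbcURL)
  .option("dbtable", tableFullName)
  .option("user", user)
  .option("password", password)
  .option("oracle.jdbc.mapDateToTimestamp", "false")
  .load()
```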