Member since: 06-08-2017
Posts: 1049
Kudos Received: 518
Solutions: 312

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 11217 | 04-15-2020 05:01 PM |
|  | 7117 | 10-15-2019 08:12 PM |
|  | 3103 | 10-12-2019 08:29 PM |
|  | 11468 | 09-21-2019 10:04 AM |
|  | 4331 | 09-19-2019 07:11 AM |
05-29-2019
02:27 AM
@OS As of now, I don't think it's possible to parameterize scheduling in NiFi.
05-25-2019
12:50 AM
@OS Yes, it's possible starting from NiFi 1.7, which introduced the DBCPConnectionPoolLookup controller service. Please refer to this answer, where I have answered a similar thread.
05-24-2019
04:36 AM
@Yasuhiro Shindo A container-killed exit code is most of the time due to memory overhead. If you haven't specified spark.yarn.driver.memoryOverhead or spark.yarn.executor.memoryOverhead in your spark-submit, then add these params; if you have already specified them, then increase the configured value. Please refer to this link to decide on the overhead value.
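As a sketch, here is one way to set these properties programmatically in Scala (the app name and the 1024 MB values are illustrative assumptions, not recommendations; in yarn-cluster mode the driver overhead generally has to be passed at spark-submit time instead):

import org.apache.spark.{SparkConf, SparkContext}

// Illustrative values only; size the overhead per the guidance in the link above.
val conf = new SparkConf()
  .setAppName("overhead_example") // hypothetical app name
  .set("spark.yarn.executor.memoryOverhead", "1024") // in MB
  .set("spark.yarn.driver.memoryOverhead", "1024")   // in MB
val sc = new SparkContext(conf)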
05-24-2019
03:49 AM
1 Kudo
@Alvarez Rafa The QueryDatabaseTable processor stores state when it runs for the first time, based on the Maximum-value column (idMovil). On subsequent runs the processor only pulls changes from the table based on the idMovil column. You can check the last state value by right-clicking the processor -> View State. To clear the state, stop the processor, right-click it -> View State, and then clear the state. Once the state is cleared, the processor pulls all records from the table again. If you are not facing this issue, then please attach a screenshot of your flow and of the scheduling on the QueryDatabaseTable processor.
05-16-2019
04:13 AM
@Jeeva Jeeva Try with the below queries:

select count(*) from <db>.<tab_name>
where date in (select max(date) from <db>.<tab_name>) -- get max date from table

(or)

select count(*) from <db>.<tab_name>
where date = (select max(date) from <db>.<tab_name>)
05-15-2019
01:35 AM
@Aman Rastogi Could you make sure you have enabled Hive support while creating the Spark session? Ex: enabling Hive support:

import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .appName("app_name")
  .config("spark.sql.warehouse.dir", warehouseLocation) // warehouseLocation: path to your warehouse directory
  .enableHiveSupport() // without this, Spark can't see Hive tables
  .getOrCreate()

Refer to this link for more details regarding enableHiveSupport.
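As a quick sanity check (the database and table names below are hypothetical), a Hive-enabled session should be able to list and query Hive tables directly:

// Hypothetical names, purely for illustration.
spark.sql("show databases").show()
spark.sql("select * from my_db.my_table limit 10").show()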
05-12-2019
03:57 PM
@Jacob Paul Try to increase the Kryo serializer buffer value in the conf you use to initialize the spark context/spark session. Change the property name from spark.kryoserializer.buffer.max to spark.kryoserializer.buffer.max.mb: conf.set("spark.kryoserializer.buffer.max.mb", "512") Refer to this and this link for more details regarding this issue.
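A minimal Scala sketch of wiring that into the conf (the app name and the 512 MB value are illustrative assumptions; raise the buffer until the overflow error disappears):

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val conf = new SparkConf()
  .setAppName("kryo_buffer_example") // hypothetical app name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.max.mb", "512") // illustrative size in MB
val spark = SparkSession.builder().config(conf).getOrCreate()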
05-08-2019
02:39 AM
@HanYan Tan Could you look into ZEPPELIN-4140? That jira was reported for the same issue. Check out the comments associated with the jira for more details; as mentioned there, with the jdbc interpreter Binding Mode set to "Per User" and scoped/isolated per-user mode, the temporary table is dropped per user. Try with the above setting and check whether the issue is resolved or not 🙂
05-02-2019
12:27 AM
1 Kudo
@Raj Negi Use the NiFi expression language replace function to replace "[ and ]" with [ and ]: ${'$1':unescapeJson():replace('"[','['):replace(']"',']')}
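As an illustration with a made-up captured value, the expression turns the escaped array into a plain JSON array:

input:  "[{\"id\":1}]"
output: [{"id":1}]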
05-01-2019
12:06 AM
1 Kudo
@Raj Negi After the FetchHbaseRow processor, use a ReplaceText processor with the below configs:

Search Value: (?s)(^.*$)
Replacement Value: ${'$1':unescapeJson()} // capture all the data and apply the NiFi expression language unescapeJson function
Character Set: UTF-8
Maximum Buffer Size: 1 MB // change as per your flowfile size
Replacement Strategy: Regex Replace
Evaluation Mode: Entire text

Flow:
1. FetchHbaseRow
2. ReplaceText
-- other processors
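As an illustration with a made-up flowfile payload, the ReplaceText step turns the escaped JSON coming out of FetchHbaseRow into plain JSON:

input:  {\"id\":\"1\",\"name\":\"raj\"}
output: {"id":"1","name":"raj"}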