Member since: 11-16-2015
Posts: 892
Kudos Received: 650
Solutions: 245

My Accepted Solutions
Views | Posted
---|---
5672 | 02-22-2024 12:38 PM
1389 | 02-02-2023 07:07 AM
3091 | 12-07-2021 09:19 AM
4208 | 03-20-2020 12:34 PM
14168 | 01-27-2020 07:57 AM
01-10-2019
03:12 PM
Thanks, Matt, for pointing this out. It seems I read the documentation too quickly. From the ExecuteSQL usage: "sql.args.N.type: Incoming FlowFiles are expected to be parametrized SQL statements. The type of each parameter is specified as an integer that represents the JDBC type of the parameter." Sorry!
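For illustration, here is a minimal Jython ExecuteScript sketch (my own example, not from this thread; it assumes the FlowFile content already holds a parametrized statement such as SELECT * FROM products WHERE asin = ?, and the value is made up) that sets the attributes ExecuteSQL expects:

```python
# Hypothetical ExecuteScript (Jython) snippet upstream of ExecuteSQL.
# ExecuteSQL reads positional parameters from the sql.args.N.type and
# sql.args.N.value attributes; the type is the integer JDBC type code
# (12 = java.sql.Types.VARCHAR).
flowFile = session.get()
if flowFile is not None:
    flowFile = session.putAttribute(flowFile, 'sql.args.1.type', '12')  # VARCHAR
    flowFile = session.putAttribute(flowFile, 'sql.args.1.value', 'B000EXAMPLE')  # made-up value
    session.transfer(flowFile, REL_SUCCESS)
```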
11-06-2018
07:23 AM
Along the flow I have an attribute called asin (aka the product ID). The flow should fail when an ID already exists in a local CSV file. With your help I changed my code to:

```python
import os

flowFile = session.get()
productFound = False
if flowFile is not None:
    if os.path.exists('/home/nifi/products.csv'):
        asin = flowFile.getAttribute('asin')
        # Check whether the product ID already appears in the CSV file
        with open('/home/nifi/products.csv') as csv_file:
            if asin in csv_file.read():
                productFound = True
    # Transfer the FlowFile exactly once, to one relationship
    if productFound:
        session.transfer(flowFile, REL_FAILURE)
    else:
        session.transfer(flowFile, REL_SUCCESS)
```
I had falsely assumed that I could transfer a FlowFile to REL_FAILURE/REL_SUCCESS multiple times; each FlowFile must be transferred exactly once.
05-28-2019
03:40 PM
I have the same problem. I set the permissions to 777 for all users:

```
[nifi@hdp-srv2 ~]$ hdfs dfs -ls /warehouse/tablespace/managed/hive/
Found 3 items
drwxrwxrwx+  - hive hadoop  0 2019-05-27 14:53 /warehouse/tablespace/managed/hive/information_schema.db
drwxrwxrwx+  - hive hadoop  0 2019-05-28 13:45 /warehouse/tablespace/managed/hive/sensor_data
drwxrwxrwx+  - hive hadoop  0 2019-05-27 14:53 /warehouse/tablespace/managed/hive/sys.db
```

The error still happens:

```
Caused by: org.apache.hadoop.hive.metastore.api.MetaException: java.security.AccessControlException: Permission denied: user=nifi, access=READ, inode="/warehouse/tablespace/managed/hive/sensor_data":hive:hadoop:drwxrwxrwx
```
10-19-2018
05:54 PM
Your "Database Type" property is set to "Generic"; try setting it to "Oracle" (for Oracle versions before 12) or "Oracle 12+".
05-02-2019
08:28 PM
Hi Matt, I am new to NiFi and I have a similar use case where I want to insert the entire flowfile content as a CLOB value. It throws an error when I just use ReplaceText and PutSQL processors, and I don't have any idea how to handle it using PutDatabaseRecord. Can you provide an example for this one?
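One possible approach, sketched under assumptions (the table docs and column body are made up, and this is my illustration rather than Matt's answer): use ExecuteScript (Jython) to move the flowfile content into PutSQL's sql.args attributes, flag it as a CLOB via the JDBC type code (2005 = java.sql.Types.CLOB), and replace the content with the parametrized INSERT that PutSQL will execute:

```python
# Hypothetical Jython ExecuteScript sketch to feed PutSQL a CLOB parameter.
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback

class ContentToClobArg(StreamCallback):
    def __init__(self):
        self.content = None
    def process(self, inputStream, outputStream):
        # Capture the original content for use as the CLOB parameter...
        self.content = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        # ...and replace the content with the statement PutSQL should run
        outputStream.write(bytearray('INSERT INTO docs (body) VALUES (?)'.encode('utf-8')))

flowFile = session.get()
if flowFile is not None:
    callback = ContentToClobArg()
    flowFile = session.write(flowFile, callback)
    flowFile = session.putAttribute(flowFile, 'sql.args.1.type', '2005')  # CLOB
    flowFile = session.putAttribute(flowFile, 'sql.args.1.value', callback.content)
    session.transfer(flowFile, REL_SUCCESS)
```

The same pattern of stashing state on the StreamCallback instance and reading it back after session.write() appears in the summation script elsewhere on this page.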
08-30-2018
08:58 AM
Thanks a lot @Matt Burgess. Adding the self reference to the variable, as 'self.total', did the trick; the code snippet is now working exactly as expected. It really helped me a lot.

```python
import sys
import traceback
from org.apache.nifi.processors.script import ExecuteScript
from org.apache.nifi.processor.io import StreamCallback
from java.io import BufferedReader, InputStreamReader, OutputStreamWriter

class ConvertFiles(StreamCallback):
    def __init__(self):
        pass

    def process(self, inputStream, outputStream):
        try:
            # Running total of all numeric fields; stored on self so it can
            # be read back after session.write() returns
            self.total = 0
            reader = InputStreamReader(inputStream, "UTF-8")
            bufferedReader = BufferedReader(reader)
            writer = OutputStreamWriter(outputStream, "UTF-8")
            line = bufferedReader.readLine()
            while line is not None:
                # Uppercase each record and write it out
                changedRec = line.upper()
                writer.write(changedRec)
                writer.write('\n')
                # Sum the comma-separated numeric fields of the line
                for value in line.split(","):
                    self.total += int(value.strip())
                line = bufferedReader.readLine()
            print("Summation of records: %s" % self.total)
            writer.flush()
            writer.close()
        except:
            print("Exception in Reader:")
            print('-' * 60)
            traceback.print_exc(file=sys.stdout)
            print('-' * 60)
            raise
        finally:
            if bufferedReader is not None:
                bufferedReader.close()
            if reader is not None:
                reader.close()

flowFile = session.get()
if flowFile is not None:
    convertFilesData = ConvertFiles()
    flowFile = session.write(flowFile, convertFilesData)
    flowFile = session.putAttribute(flowFile, "FileSum", str(convertFilesData.total))
    session.transfer(flowFile, ExecuteScript.REL_SUCCESS)
```

Snapshot Result:
08-20-2018
01:53 PM
1 Kudo
@CHEH YIH LIM
I think you are using an ExtractText processor to extract the content and keep it as an attribute on the flowfile. If so, adjust these two property values to match your flowfile size:

- Maximum Buffer Size (default 1 MB): the maximum amount of data to buffer (per file) in order to apply the regular expressions. Files larger than the specified maximum will not be fully evaluated.
- Maximum Capture Group Length (default 1024): the maximum number of characters a given capture group value can have. Any characters beyond the max will be truncated.

If the answer helped to resolve your issue, click on the Accept button below to accept the answer; that helps community users find solutions quickly for these kinds of issues.
08-07-2018
05:42 PM
1 Kudo
Translate Field Names "normalizes" the column names by uppercasing them, but also by removing the underscores, which should explain why TEST_ID isn't matching; I can't tell why STRING isn't matching, though. Can you try setting the field names in the schema to their uppercase counterparts, as well as the keys in the JSON file? For JSON input you can also use JoltTransformJSON (for a flat JSON file of simple key/value pairs); check out this spec, which lowercases the field names; you can change the modify function to =toUpper instead of =toLower.
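To make the mismatch concrete, here is a tiny sketch of how I read the normalization described above (my assumption about the behavior, not the processor's actual code):

```python
# Hypothetical model of the "Translate Field Names" normalization:
# uppercase the name and strip underscores.
def translate_field_name(name):
    return name.upper().replace("_", "")

print(translate_field_name("test_id"))  # TESTID -> no longer matches TEST_ID
print(translate_field_name("string"))   # STRING -> should match, per the post
```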
07-30-2018
12:31 PM
Hi Matt, thanks for the feedback. Indeed, it seems the issue was in version 1.6. I checked with version 1.7.1 and everything works fine with double quotes.