Member since: 06-26-2015 | Posts: 511 | Kudos Received: 137 | Solutions: 114

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1395 | 09-20-2022 03:33 PM
 | 4044 | 09-19-2022 04:47 PM
 | 2336 | 09-11-2022 05:01 PM
 | 2454 | 09-06-2022 02:23 PM
 | 3863 | 09-06-2022 04:30 AM
08-29-2022
04:04 PM
@yagoaparecidoti, You can do the reverse with the command "SHOW ROLE GRANT GROUP <group_name>;". I don't think there's a command to do exactly what you need, but you can query the Sentry backend database directly:

select
  r.ROLE_NAME,
  g.GROUP_NAME
from
  SENTRY_GROUP g
  join SENTRY_ROLE_GROUP_MAP rg on rg.GROUP_ID = g.GROUP_ID
  join SENTRY_ROLE r on r.ROLE_ID = rg.ROLE_ID
order by
  r.ROLE_NAME,
  g.GROUP_NAME
;

Cheers, André
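If you want a feel for what that join returns before running it against the real Sentry backend, the same logic can be sketched with an in-memory SQLite database and a minimal version of the three tables (the group and role names below are made up for illustration):

```python
import sqlite3

# Minimal, illustrative versions of the three Sentry backend tables
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SENTRY_GROUP (GROUP_ID INTEGER PRIMARY KEY, GROUP_NAME TEXT);
CREATE TABLE SENTRY_ROLE (ROLE_ID INTEGER PRIMARY KEY, ROLE_NAME TEXT);
CREATE TABLE SENTRY_ROLE_GROUP_MAP (ROLE_ID INTEGER, GROUP_ID INTEGER);

INSERT INTO SENTRY_GROUP VALUES (1, 'analysts'), (2, 'etl');
INSERT INTO SENTRY_ROLE VALUES (10, 'read_only'), (20, 'admin');
INSERT INTO SENTRY_ROLE_GROUP_MAP VALUES (10, 1), (10, 2), (20, 2);
""")

# Same join as above: list each role together with the groups it is granted to
rows = conn.execute("""
    SELECT r.ROLE_NAME, g.GROUP_NAME
    FROM SENTRY_GROUP g
    JOIN SENTRY_ROLE_GROUP_MAP rg ON rg.GROUP_ID = g.GROUP_ID
    JOIN SENTRY_ROLE r ON r.ROLE_ID = rg.ROLE_ID
    ORDER BY r.ROLE_NAME, g.GROUP_NAME
""").fetchall()

for role, group in rows:
    print(role, group)
```

Each output row is one role-to-group grant, so a role granted to two groups appears twice.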
08-29-2022
03:54 PM
@yagoaparecidoti, Do you know the passwords for the users livy and livy-http? Can you manually kinit with those two users from the command line? Can you also check in AD the value of the userPrincipalName property for those two users and share it here? Cheers, André
08-28-2022
04:17 PM
@hegdemahendra, You need to set the time characteristic of the stream for it to work. For example, try setting it to processing time, as shown below:

DataStream<String> matchedStream = patternStream
    .inProcessingTime()
    .process(new PatternProcessFunction<String, String>() {
        @Override
        public void processMatch(Map<String, List<String>> match, Context context, Collector<String> collector) throws Exception {
            collector.collect(match.get("start").toString());
        }
    });

Cheers, André
08-28-2022
03:45 PM
@ramesh0430 , What's the throughput that you are expecting? What are your processor configurations? Cheers, André
08-28-2022
03:44 PM
@Vinay91 , Do you see any errors in the nifi-app.log file? Do you have a subscription with Cloudera Support? This is something the support team could help you quickly resolve. Cheers, André
08-28-2022
03:40 PM
@belka , No, Apache Druid is not currently supported. Cheers, André
08-27-2022
06:05 PM
@learncloud1111 , All vulnerabilities regarding log4j have already been fixed/addressed by Cloudera in CDP 7.1.7 SP1. You should not need to fix anything else on your own. Cheers, André
08-27-2022
06:00 PM
@spserd , Do these errors appear only once in the logs or are they recurring? What's the command line you used to run the job? Cheers, André
08-26-2022
09:59 PM
@Omarb , Initially I thought this was a problem with the CSVRecordSetWriter, but I was mistaken. The issue here is that even though your CSVReader is set to ignore the header line, it has Schema Access Strategy set to "Infer Schema", and this will cause the reader to consume the first line of the flow file to infer the schema, even though the other property tells it to ignore it. To avoid this, set the Schema Access Strategy property to "Use 'Schema Text' Property" and provide a schema that matches your flowfile structure. For example:

{
  "type": "record",
  "name": "MyFlowFile",
  "fields": [
    { "name": "col_a", "type": "string" },
    { "name": "col_b", "type": "string" },
    { "name": "col_c", "type": "string" },
    ...
  ]
}

This will stop the first line from being "consumed" by the reader. Cheers, André
08-26-2022
09:07 PM
@Griggsy , I don't know if there's a way to do exactly that with either the JOLT or ReplaceText processors. You can do it with ExecuteScript and the following Python script, though:

from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback
import json
import re

def to_snake_case(name):
    # Insert an underscore before each uppercase letter that doesn't start the string, then lowercase
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

def convert_keys_to_snake_case(obj):
    # Recurse through lists and dicts, renaming every key; scalar values are left untouched
    if isinstance(obj, list):
        return [convert_keys_to_snake_case(o) for o in obj]
    elif isinstance(obj, dict):
        return {to_snake_case(k): convert_keys_to_snake_case(v) for k, v in obj.items()}
    else:
        return obj

class PyStreamCallback(StreamCallback):
    def __init__(self):
        pass
    def process(self, inputStream, outputStream):
        text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        converted = convert_keys_to_snake_case(json.loads(text))
        outputStream.write(bytearray(json.dumps(converted).encode('utf-8', 'ignore')))

flow_file = session.get()
if flow_file is not None:
    flow_file = session.write(flow_file, PyStreamCallback())
    session.transfer(flow_file, REL_SUCCESS)

Cheers, André
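The key-conversion logic itself has no NiFi dependency, so you can try it in plain Python before wiring it into ExecuteScript (the sample payload below is made up for illustration):

```python
import json
import re

def to_snake_case(name):
    # Insert "_" before each uppercase letter that doesn't start the string, then lowercase
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

def convert_keys_to_snake_case(obj):
    # Recurse through lists and dicts, renaming every key; scalar values pass through unchanged
    if isinstance(obj, list):
        return [convert_keys_to_snake_case(o) for o in obj]
    elif isinstance(obj, dict):
        return {to_snake_case(k): convert_keys_to_snake_case(v) for k, v in obj.items()}
    return obj

payload = json.loads('{"firstName": "Ada", "homeAddress": {"zipCode": "12345"}}')
converted = convert_keys_to_snake_case(payload)
print(json.dumps(converted, sort_keys=True))
# prints {"first_name": "Ada", "home_address": {"zip_code": "12345"}}
```

Note that the recursion also renames keys in nested objects and in objects inside arrays, which the flow-file version relies on as well.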