Member since
06-26-2015
515
Posts
138
Kudos Received
114
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2261 | 09-20-2022 03:33 PM |
| | 6016 | 09-19-2022 04:47 PM |
| | 3241 | 09-11-2022 05:01 PM |
| | 3704 | 09-06-2022 02:23 PM |
| | 5776 | 09-06-2022 04:30 AM |
07-05-2022
03:21 PM
1 Kudo
@Neera456 , It's hard to say with the little information provided; it could be a lot of things. Try restarting the Cloudera Manager server from the command line (`sudo systemctl restart cloudera-scm-server`) and see if the problems go away. If not, you'll have to look for the root cause in the logs (CM's or HDFS's). Cheers, André
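If a restart doesn't help, the next step is the log check mentioned above. A minimal sketch of that triage on the CM host (the log path is the default location and may differ on your install):

```shell
# Restart the Cloudera Manager server (requires root privileges)
sudo systemctl restart cloudera-scm-server

# Confirm the service came back up
sudo systemctl status cloudera-scm-server --no-pager

# If problems persist, look for the root cause in the CM server log
# (default location; adjust if your installation uses a custom log directory)
sudo tail -n 100 /var/log/cloudera-scm-server/cloudera-scm-server.log
```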
07-05-2022
06:49 AM
I believe the HDFS bad state is not related to the permissions set by the canary test. The problem seems to be related to the process of kerberizing your cluster: it appears something didn't work correctly, and your 3 DataNodes are listed as dead in the SMON log. To use the command line after enabling Kerberos, you first need to authenticate using the `kinit` command. Cheers, André
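For reference, a minimal sketch of authenticating before running HDFS commands on a kerberized cluster (the principal name here is a placeholder; substitute your own):

```shell
# Obtain a Kerberos ticket for your principal (hypothetical principal name)
kinit alice@EXAMPLE.COM

# Verify the ticket was granted and check its expiry
klist

# HDFS commands will now authenticate using the Kerberos ticket
hdfs dfs -ls /
```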
07-05-2022
05:17 AM
@dida The following connector configuration worked for me. My schema was stored in Schema Registry and the connector fetched it from there:

```json
{
  "connector.class": "com.cloudera.dim.kafka.connect.hdfs.HdfsSinkConnector",
  "hdfs.output": "/tmp/topics_output/",
  "hdfs.uri": "hdfs://nn1:8020",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "name": "asd",
  "output.avro.passthrough.enabled": "true",
  "output.storage": "com.cloudera.dim.kafka.connect.hdfs.HdfsPartitionStorage",
  "output.writer": "com.cloudera.dim.kafka.connect.hdfs.parquet.ParquetPartitionWriter",
  "tasks.max": "1",
  "topics": "avro-topic",
  "value.converter": "com.cloudera.dim.kafka.connect.converts.AvroConverter",
  "value.converter.passthrough.enabled": "false",
  "value.converter.schema.registry.url": "http://sr-1:7788/api/v1"
}
```

Cheers, André
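For illustration, a connector configuration like this is typically submitted through the Kafka Connect REST API. A hedged sketch (the host and port are placeholders, and `connector.json` is assumed to hold the config wrapped as `{"name": "asd", "config": {...}}`):

```shell
# Submit the connector to the Kafka Connect REST endpoint
# (connect-host:28083 is a placeholder for your Connect worker address)
curl -s -X POST http://connect-host:28083/connectors \
  -H "Content-Type: application/json" \
  -d @connector.json

# Check the connector's status after creation
curl -s http://connect-host:28083/connectors/asd/status
```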
07-05-2022
02:46 AM
@stale , Could you please also share the output of this command? `hdfs dfs -ls /` Cheers, André
07-05-2022
02:43 AM
@stale What did you do to fix the Kerberos issue? Would you be able to share the SERVICE_MONITOR log under `/var/log/cloudera-scm-firehose`? Cheers, André
07-03-2022
11:01 PM
2 Kudos
@hegdemahendra , This looks like a regression to me. Could you please open a bug Jira at https://issues.apache.org/jira/projects/NIFI/ to report this issue? A workaround would be to change your code like below: `log.info(String.valueOf(xyzId));` Cheers, André
07-03-2022
10:26 PM
@pk87 , The HandleHttpRequest processor only produces a flowfile when an HTTP call is received on its port. If you want this to be called every 60 minutes, you can add an InvokeHTTP processor, scheduled to run every 60 minutes, that will call your API endpoint in the same way Postman does. You actually don't need the HandleHttpRequest processor to run this on a regular basis; you could possibly replace it with a GenerateFlowFile processor that executes every 60 minutes and triggers the process for you. Cheers, André
06-30-2022
03:58 PM
@snm1523 , Sorry, I don't remember either. Unfortunately I don't have a cluster handy now to confirm this. Cheers, André
06-29-2022
06:14 PM
@MattWho , Not sure this is what's happening here, but if the disk fills up with other stuff outside of NiFi's control and the overall disk usage still hits the configured NiFi limits, the same thing would happen, right? André
06-29-2022
01:41 AM
@roshanbi , You must configure your Kafka consumer to use a consumer group and enable offset commits. This way the client periodically saves the last read offset in Kafka itself, so it can pick up from where it left off after a restart. Please check the Kafka documentation for the meaning of these properties:

- `group.id`
- `enable.auto.commit`
- `auto.offset.reset`

Cheers, André
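As a concrete sketch, you can observe this behavior with the console consumer shipped with Kafka (the broker address, topic, and group name below are placeholders):

```shell
# Consume as part of a consumer group with periodic offset commits;
# on restart, the group resumes from its last committed offset.
kafka-console-consumer.sh \
  --bootstrap-server broker-1:9092 \
  --topic my-topic \
  --group my-consumer-group \
  --consumer-property enable.auto.commit=true \
  --consumer-property auto.offset.reset=earliest
```

Run it once, stop it, and run it again with the same `--group`: the second run skips the messages already committed by the first.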