Member since
06-27-2019
147
Posts
9
Kudos Received
11
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2464 | 01-31-2022 08:42 AM
 | 629 | 11-24-2021 12:11 PM
 | 1060 | 11-24-2021 12:05 PM
 | 1992 | 10-08-2019 10:00 AM
 | 2522 | 10-07-2019 12:08 PM
11-24-2021
02:00 PM
Hi @AshwinPatil If I understood correctly, the question is whether topic-level configs take precedence over the broker's global settings, right? If yes, then the answer is "yes": if we alter the topic with retention.ms, for example, it will take precedence over the log.retention.hours value specified on the brokers.
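For reference, a topic-level override like this can be set with the kafka-configs tool; the broker address, topic name, and retention value below are placeholders for your own:

```shell
# Set a per-topic retention of 7 days (604800000 ms); this overrides the
# broker-wide log.retention.hours for this topic only.
kafka-configs --bootstrap-server broker1:9092 \
  --alter --entity-type topics --entity-name my-topic \
  --add-config retention.ms=604800000

# Verify the override is in place:
kafka-configs --bootstrap-server broker1:9092 \
  --describe --entity-type topics --entity-name my-topic
```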
11-24-2021
12:24 PM
@Ani1991 From the documentation: https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/smm-security/topics/smm-securing-streams-messaging-manager.html "If you deploy SMM without security, the login page is not enabled on the SMM UI by default. When you enable Kerberos authentication, SMM uses SPNEGO to authenticate users and allows them to view or create topics within Kafka by administering Ranger Kafka Policies." This looks like a Kerberos issue with the token cached on the machine from which you're trying to access the SMM UI. Can you try using the Firefox browser and make sure it's configured properly for SPNEGO? See this documentation for more details: https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_sg_browser_access_kerberos_protected_url.html
11-24-2021
12:11 PM
Hi @jaeseung For the SMM "WARNING" state: the status of a replication flow is calculated based on the replication latency and the throughput:
- if any of the metrics are not present -> INACTIVE
- if latency max and latency age are smaller than a fixed grace period (60 sec), and throughput max is not zero -> ACTIVE
- if throughput age is smaller than the fixed grace period (60 sec), and throughput max is not zero -> WARNING
- otherwise -> INACTIVE
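The rules above can be sketched as a small Python function. This is a paraphrase of the documented decision order, not SMM's actual code; the parameter names and the seconds unit are assumptions:

```python
GRACE_PERIOD_S = 60  # fixed grace period from the SMM status rules

def replication_status(latency_max, latency_age, throughput_max, throughput_age):
    """Return the replication-flow status per the rules above.
    Any argument may be None when that metric is absent."""
    metrics = (latency_max, latency_age, throughput_max, throughput_age)
    if any(m is None for m in metrics):
        return "INACTIVE"
    if latency_max < GRACE_PERIOD_S and latency_age < GRACE_PERIOD_S \
            and throughput_max != 0:
        return "ACTIVE"
    if throughput_age < GRACE_PERIOD_S and throughput_max != 0:
        return "WARNING"
    return "INACTIVE"
```

Note the rules are checked in order: a flow only falls through to WARNING when it failed the ACTIVE check (stale or high latency) but its throughput metric is still fresh and non-zero.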
11-24-2021
12:05 PM
Hi @jaeseung The client configurations have to be passed using the replication's cluster alias:
- for consumer configs: primary->secondary.consumer.<config>
- for producer configs: primary->secondary.producer.override.<config>
Please try adding, under the SRM configs: <source>-><target>.producer.override.max.request.size=<desired value> If that doesn't work, use: <source>-><target>.producer.max.request.size=<desired value>
04-30-2021
08:41 AM
From the Kafka perspective, max.poll.records is an upper bound on the number of messages that can be retrieved in a single poll call; it is not a minimum, and poll does not wait for that many messages to accumulate. For example, imagine you have a topic and you send 10 messages: even with max.poll.records set to 10000, the next poll still returns those 10 messages. The upper bound is usually lowered when consumers start timing out because a full batch cannot be processed within max.poll.interval.ms (default 5 minutes). In summary, consumers are constantly consuming messages (1 or many), and max.poll.records is just an upper bound used to control how many messages we can get in each poll call, to make sure each batch is processed in time (max.poll.interval.ms). Hope that clarifies the usage of this property.
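A toy sketch of the upper-bound behavior described above (not the real Kafka client, just a buffer and a poll that returns whatever is already available, up to the cap):

```python
from collections import deque

def poll(buffer: deque, max_poll_records: int) -> list:
    """Return buffered records up to max_poll_records.
    Like the real poll, it never waits for that many to accumulate."""
    records = []
    while buffer and len(records) < max_poll_records:
        records.append(buffer.popleft())
    return records

topic = deque(range(10))       # 10 messages waiting in the topic
batch = poll(topic, 10000)     # upper bound far above what's available
print(len(batch))              # -> 10: all available messages are delivered
```

Conversely, with 25 buffered messages and max_poll_records=10, the first poll returns 10 and leaves 15 for subsequent polls.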
04-30-2021
08:31 AM
I'm afraid that Kafka doesn't come with an HDFS sink connector or anything similar out of the box in HDP 2.6.5; this is coming in CDP 7.1.1. I believe NiFi or Spark are alternatives that can be used for this.
04-30-2021
08:14 AM
I would suggest checking the keystores you're using in the NiFi consumer with a simple producer/consumer on the Kafka host itself. For example: create a file called client.properties and add the SSL details, following the example here: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/configuring-wire-encryption/content/configuring_kafka_producer_and_kafka_consumer.html Then run the consumer and see if the issue reproduces; if it does, you can enable debug logging for the client to get more details about the exception. I hope that helps to find the root cause.
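As a sketch of that test, with placeholder hosts, paths, and passwords throughout (substitute your own; the keystore entries are only needed when the brokers require mutual TLS):

```shell
cat > client.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
# Needed only for 2-way (client-authenticated) TLS:
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
EOF

# Run a console consumer against the secure port using those settings:
kafka-console-consumer --bootstrap-server broker1:9093 \
  --topic my-topic --consumer.config client.properties
```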
04-30-2021
08:05 AM
Support for Kafka Connect was added in CDP 7.1.1; please review the documentation below: https://docs.cloudera.com/runtime/7.1.1/release-notes/topics/rt-whats-new-kafka.html
04-30-2021
07:54 AM
If the connector is running on top of CDP, you can check the log files under /var/log/kafka or /run/cloudera-scm-agent/process/xxxxxxxx-kafka-KAFKA_CONNECT/logs. If this is standalone Kafka Connect and there are no details in any log file, I would suggest adding the JVM property below to the process: -XX:ErrorFile=targetDir/hs_err_pid_%p.log This property makes the JVM write an error file when it crashes; when a crash happens, no details are added to the regular log files. Hope that helps to find the root cause.
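For a standalone worker started from the Apache Kafka scripts, the flag can be passed via the KAFKA_OPTS environment variable, which the launch scripts pick up. The output path and properties file below are assumptions about your setup:

```shell
# %p in the file name expands to the process id of the crashed JVM.
export KAFKA_OPTS="-XX:ErrorFile=/var/log/kafka/hs_err_pid%p.log"
connect-distributed.sh connect-distributed.properties
```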
04-30-2021
07:44 AM
If I understood correctly, you're asking about the connectors provided by Cloudera; could you please confirm? If yes, in the document below you can find the connectors currently supported by Cloudera: https://docs.cloudera.com/cdp-private-cloud-base/7.1.5/kafka-connect/topics/kafka-connect-connector.html On the other hand, you can load any connector by following the steps mentioned in the document below: https://docs.cloudera.com/cdp-private-cloud-base/7.1.5/kafka-connect/topics/kafka-connect-connector-install.html Please let us know if that answers your question.