Member since: 01-09-2014
Posts: 283
Kudos Received: 70
Solutions: 50
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1705 | 06-19-2019 07:50 AM |
| | 2726 | 05-01-2019 08:07 AM |
| | 2775 | 04-10-2019 08:49 AM |
| | 2679 | 03-20-2019 09:30 AM |
| | 2359 | 01-23-2019 10:58 AM |
08-29-2018
07:17 AM
1 Kudo
You'll have to create the client.properties file, as noted in "Step 5. Configuring Kafka Clients" here: https://www.cloudera.com/documentation/kafka/latest/topics/kafka_security.html
cat >/root/client.properties<<EOF
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.client.auth=none
ssl.truststore.location=/etc/cdep-ssl-conf/CA_STANDARD/truststore.jks
ssl.truststore.password=cloudera
EOF
cat >/root/jaas.conf<<EOF
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=false
useTicketCache=true
keyTab="/cdep/keytabs/kafka.keytab"
principal="kafka@EXAMPLE.CLOUDERA.COM";
};
EOF
KAFKA_OPTS="-Djava.security.auth.login.config=/root/jaas.conf" kafka-console-producer --broker-list ${HOSTNAME}:9093 --topic test --producer.config /root/client.properties
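To verify from the consumer side, the same jaas.conf and client.properties can be reused with the console consumer. A minimal sketch, assuming the same topic, port, and keytab setup as the producer example above:
# reuses the jaas.conf and client.properties created above
KAFKA_OPTS="-Djava.security.auth.login.config=/root/jaas.conf" kafka-console-consumer --bootstrap-server ${HOSTNAME}:9093 --topic test --from-beginning --consumer.config /root/client.properties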
07-27-2018
02:58 PM
Try it without the kafka prefix: tier1.channels.kafka_chan.parseAsFlumeEvent = false (see http://flume.apache.org/FlumeUserGuide.html#kafka-channel)
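For context, a minimal Kafka channel definition with that property set would look something like this. A sketch only: the agent name "tier1" and channel name "kafka_chan" come from your post, while the broker address and topic name are assumptions:
# broker address and topic are hypothetical placeholders
tier1.channels.kafka_chan.type = org.apache.flume.channel.kafka.KafkaChannel
tier1.channels.kafka_chan.kafka.bootstrap.servers = broker1.example.com:9092
tier1.channels.kafka_chan.kafka.topic = flume-channel
tier1.channels.kafka_chan.parseAsFlumeEvent = false
-pd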
06-29-2018
08:39 AM
Kafka just accepts messages (whatever format: plaintext, binary, etc.) and makes them available for consumption. It's really up to your producers and consumers how the actual message data is structured.
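For example, you can push arbitrary bytes through the console tools and Kafka stores them untouched. A sketch, where the broker address and topic name are assumptions:
# Kafka does not inspect or validate the payload; broker/topic are hypothetical
echo '{"id": 1, "payload": "any structure you like"}' | kafka-console-producer --broker-list broker1.example.com:9092 --topic test
kafka-console-consumer --bootstrap-server broker1.example.com:9092 --topic test --from-beginning
-pd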
06-29-2018
08:38 AM
It's still possible that your HDFS sinks are just not able to keep up with the RabbitMQ source. Did you review the graphs to compare the rate at which the sinks are draining vs. the rate at which the source is adding events? Also, you are using sinkgroups, which makes delivery single-threaded (i.e. one sink at a time). There is really no reason to use sinkgroups; if you remove them, delivery of events is parallelized across the sinks (2x sinks = 2x delivery rate, 4x sinks = 4x delivery rate), as in the sketch below.
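A sketch of what that looks like: two independent HDFS sinks reading from the same channel, with no sinkgroup, drain in parallel. All names and the path below are placeholders:
# hypothetical agent/channel/sink names; both sinks pull from the same channel concurrently
tier1.sinks = hdfs_sink1 hdfs_sink2
tier1.sinks.hdfs_sink1.type = hdfs
tier1.sinks.hdfs_sink1.channel = mem_chan
tier1.sinks.hdfs_sink1.hdfs.path = /flume/events
tier1.sinks.hdfs_sink2.type = hdfs
tier1.sinks.hdfs_sink2.channel = mem_chan
tier1.sinks.hdfs_sink2.hdfs.path = /flume/events
-pd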
06-28-2018
03:37 PM
It seems like your sinks may not be draining fast enough. Do you see any sink errors in your logs? If you look at the Flume graphs for event takes per second (by the sinks) vs. events accepted (from the source), do you see any patterns?
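If you don't have the graphs handy, Flume's built-in JSON reporting exposes the same counters. A sketch, assuming an agent named "tier1" and an arbitrary monitoring port:
# agent name, conf file, and port are hypothetical
flume-ng agent --conf conf --conf-file tier1.conf --name tier1 -Dflume.monitoring.type=http -Dflume.monitoring.port=34545
# compare SINK.*.EventDrainSuccessCount against SOURCE.*.EventAcceptedCount over time
curl http://localhost:34545/metrics
-pd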
05-17-2018
10:15 AM
1 Kudo
The output you see shows that all three replicas are in sync. min.insync.replicas governs whether produce requests (with acks=all) succeed when the number of replicas in the ISR falls below that value. Have you tried shutting down one broker to see if you can still produce to and consume from the topic? How are you validating that the min.insync.replicas threshold is crossed?
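One way to validate it directly; a sketch, where the topic, ZooKeeper, and broker addresses are assumptions, and the producer must use acks=-1 (all) for the threshold to matter:
# hypothetical topic/ZK/broker names
kafka-configs --zookeeper zk1.example.com:2181 --alter --entity-type topics --entity-name test --add-config min.insync.replicas=2
# stop two of the three brokers, then produce with acks=-1 (all); expect NotEnoughReplicas errors
kafka-console-producer --broker-list broker1.example.com:9092 --topic test --request-required-acks -1
-pd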
05-07-2018
10:46 AM
This seems to indicate that jute.maxbuffer has been exceeded. You can increase it on the command-line side by exporting the following: export ZKCLI_JVM_FLAGS=-Djute.maxbuffer=4194304 You may also need to confirm, in the ZK service configuration, that the jute.maxbuffer size is set to 4 MB.
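For reference, the client-side export and where the equivalent server-side flag would go. A sketch: the server-side mechanism varies by deployment (Cloudera Manager exposes it as a ZooKeeper Java option), so treat the zookeeper-env.sh route as an assumption:
# client side (zookeeper-client / zkCli.sh):
export ZKCLI_JVM_FLAGS=-Djute.maxbuffer=4194304
# server side, e.g. via zookeeper-env.sh when not managed by Cloudera Manager (an assumption):
export SERVER_JVMFLAGS=-Djute.maxbuffer=4194304
-pd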
03-30-2018
01:47 PM
There isn't enough information here to determine what the problem could be. If you can provide more log entries and your configuration, that may help. -pd
03-30-2018
01:45 PM
1 Kudo
There shouldn't be any complications from enabling jumbo frames, as that happens at the network layer and is transparent to Kafka.
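If you want to sanity-check the change, you can verify the MTU at the OS level, outside of Kafka entirely. A sketch; the interface and host names are assumptions:
# hypothetical interface/host; 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers
ip link show eth0 | grep mtu
ping -M do -s 8972 broker1.example.com
-pd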
03-16-2018
09:51 AM
Enable DEBUG for the Flume service; it should show more about the Kafka connection process. Additionally, if you describe the 'flume' consumer group, do you see that it's connected to the Airports topic? kafka-consumer-groups --describe --group flume --bootstrap-server quickstart.cloudera:9092 --new-consumer
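For the DEBUG side, bumping the relevant loggers in Flume's log4j.properties is usually enough. A sketch; the exact file location depends on your deployment:
# add to the Flume agent's log4j.properties (location is deployment-specific)
log4j.logger.org.apache.flume = DEBUG
log4j.logger.org.apache.kafka = DEBUG
-pd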