Member since: 03-07-2019
Posts: 24
Kudos Received: 14
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
| 36795 | 01-26-2018 07:42 PM
01-24-2021 01:41 AM
I was just able to confirm that the update command listed is for the PostgreSQL database flavor.
05-01-2019 08:25 PM
Hi Vedant! You state: "num.io.threads should be greater than the number of disks dedicated for Kafka. I strongly recommend to start with same number of disks first." Is num.io.threads to be calculated as the number of disks per node allocated to Kafka, or the total number of disks for Kafka across the entire cluster? I'm guessing disks per node dedicated for Kafka, but I wanted to confirm. Thanks, Jeff G.
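For illustration, a minimal server.properties sketch under the per-node reading; the disk count and mount paths here are hypothetical, not taken from the thread:

```
# Hypothetical broker with 4 data disks mounted on this node
log.dirs=/data1/kafka-logs,/data2/kafka-logs,/data3/kafka-logs,/data4/kafka-logs

# Per-node interpretation: start with one I/O thread per local disk,
# then raise it above the disk count if needed
num.io.threads=4
```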
04-24-2019 01:13 PM
solomonchinni has two typographical errors in their response: "ntrap" should be "notrap", and "netmask" should be "mask". The resulting corrected line should be:

restrict <ipaddress> mask 255.255.255.0 nomodify notrap

The <ipaddress> should be the start of the IP address range that you want to be covered by the provided mask. For example, if your IP address range is 192.168.1.1 through 192.168.1.255, you'll want:

restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
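For context, a minimal /etc/ntp.conf sketch using that corrected line; the upstream server entries are placeholders, not from the thread:

```
# Upstream time sources (placeholder pool hosts)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst

# Let clients in 192.168.1.0/24 sync time, but forbid runtime
# configuration changes (nomodify) and trap service (notrap)
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
```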
10-18-2018 06:45 PM
Hi Mark: If you continue to see these error messages even after all Kafka brokers have started up successfully, then there may be some kind of connectivity issue between the broker nodes. If these messages do not continue after all Kafka brokers have started, you can safely ignore them. Thank you, Jeff Groves
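As a quick hedged way to rule out broker-to-broker connectivity problems (the hostname is a placeholder; 6667 is the usual HDP Kafka listener port), you could try something like this from each broker node:

```
# Check that this node can reach the Kafka listener on a peer broker
nc -vz broker2.example.com 6667
```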
08-17-2018 07:57 PM
The option to use a file that contains the node FQDNs does not seem to work. The command treats the filename as if it were a node FQDN. Also, what is the format of clusternodes1.txt? One host per line, or all hosts comma-separated on one line?
04-13-2018 06:31 PM
Hi @Prosenjit Pramanick, Are you receiving this error message when using the console producer? If so, what command line are you using to run the console producer? If not, please try using the console producer to connect to Kafka to confirm that your brokers are working as expected; see https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_command-line-installation/content/validate_kafka.html for more details on validating your Kafka cluster. Thank you, Jeff Groves
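As a rough sketch of that validation flow (the broker host and topic name are placeholders; 6667 is the default HDP Kafka port):

```
# Terminal 1: send a few test messages (each typed line becomes one message)
bin/kafka-console-producer.sh --broker-list broker1.example.com:6667 --topic test

# Terminal 2: read them back to confirm end-to-end delivery
bin/kafka-console-consumer.sh --bootstrap-server broker1.example.com:6667 --topic test --from-beginning
```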
01-26-2018 07:42 PM
2 Kudos
Hi Anurag: The files under /kafka-logs are the actual data files used by Kafka. They aren't the application logs for the Kafka brokers. The files under /var/log/kafka are the application logs for the brokers. Thank you, Jeff Groves
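To make the distinction concrete, a hedged example of what you would typically see in each location (directory contents are illustrative, not from Anurag's cluster):

```
# Kafka data: one directory per topic-partition, holding log segment files
ls /kafka-logs
# e.g. my-topic-0/  my-topic-1/  __consumer_offsets-0/ ...

# Broker application logs: what you read when troubleshooting the broker itself
ls /var/log/kafka
# e.g. server.log  controller.log  state-change.log ...
```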
12-14-2017 09:04 PM
1 Kudo
Hi @Mark Lee:
Have you attempted to call the consumer and producer with the following parameter appended to the end of the command line:
--security-protocol SASL_PLAINTEXT
As an example, your producer command line would look something like this:

```
bin/kafka-console-producer.sh --broker-list localhost:6667 --topic apple3 --producer.config=/tmp/kafka/producer.properties --security-protocol SASL_PLAINTEXT
```
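The consumer side would be analogous; a sketch, assuming the same HDP-style CLI and a consumer.properties with matching SASL settings (the config path is a placeholder):

```
bin/kafka-console-consumer.sh --bootstrap-server localhost:6667 --topic apple3 --from-beginning --consumer.config /tmp/kafka/consumer.properties --security-protocol SASL_PLAINTEXT
```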
08-17-2017 07:41 PM
Hi @Aditya Gopal: It looks like the version of python on this node has been changed to one where the method called is not available. Has there been a change or upgrade to the version of python installed on this particular node recently? Thank you, Jeff G.
07-13-2017 02:58 PM
1 Kudo
When a Kafka cluster is over-subscribed, the loss of a single broker can be a jarring experience for the cluster as a whole. This is especially true when trying to bring a previously failed broker back into a cluster. To help mitigate some of the impact of returning a broker that has been out of the cluster for a number of days, it can help to remove that broker's ID from the Replicas list of all of its partitions before it re-enters the cluster.

Generally, you want a Kafka cluster that is sized properly to handle single-node failures, but as is often the case, the size of the use case on the Kafka cluster can quickly start to exceed its physical limitations. In those situations, while you're waiting for new hardware to arrive to augment your cluster, you still need to keep the existing cluster working as well as possible. To that end, there are some AWK scripts available on GitHub that help create the JSON files needed to essentially spoon-feed partitions back onto a broker. This collection of scripts, playfully called Kawkfa, is still alpha at best and has its bugs, but someone may find it useful in the above situation.

The high-level procedure is as follows (see the JSON sketch after this list):

1. For each partition entry that includes the broker.id of the failed node, remove that broker ID from the Replicas list
2. Bring the wayward broker back into the cluster
3. Add the wayward broker ID back to the Replicas list, but do so without making it the preferred replica
4. Once the broker has been added back to its partitions, make the broker the preferred replica for a random number of the partitions

Caveats about the scripts:

- You are using the scripts at your own risk. Be careful and understand what the scripts are doing prior to use
- There are bugs in the scripts -- most notable is an extra comma at the end of the last partition entry that should not be there. Simply removing that comma will allow the JSON file to be properly read

Have fun!
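For illustration, a minimal sketch of the kind of reassignment JSON involved, assuming broker 3 is the failed node and a hypothetical topic my-topic (this is the standard kafka-reassign-partitions.sh input format, not necessarily Kawkfa's exact output):

```
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [1, 2]},
    {"topic": "my-topic", "partition": 1, "replicas": [2, 1]}
  ]
}
```

Here broker 3 has been dropped from each replicas list (step 1). After the broker rejoins, a second pass would append 3 to the end of each list so it is not the preferred (first) replica (step 3).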