Member since
03-07-2019
24
Posts
14
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 37135 | 01-26-2018 07:42 PM
01-24-2021
01:41 AM
I was just able to confirm that the update command listed uses the PostgreSQL flavor of SQL.
01-28-2020
10:10 AM
Hi, there are two IP addresses in the output of ntpq -np, remote and refid. Do you mean we should use the remote IP? Thank you for sharing your knowledge!
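For anyone comparing the two columns: "remote" is the peer your server is actually synchronizing with, while "refid" is that peer's own upstream source. A minimal sketch of pulling the remote IP out of a sample line (the sample output and the leading tally character `*` are illustrative, not from this thread):

```shell
# Sample `ntpq -np` peer line (illustrative). The leading tally mark
# (*, +, #, o, -) flags peer status and is not part of the address.
sample='*192.168.1.10    129.6.15.28      2 u   37   64  377    0.512    0.021   0.010'

# Strip a leading tally character from the first field and print the remote IP.
echo "$sample" | awk '{gsub(/^[*+#o-]/, "", $1); print $1}'
# → 192.168.1.10
```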
12-20-2018
06:10 PM
The above article doesn't work in Ambari 2.7.3 due to a bug. Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 119, in <module>
RemovePreviousStacks().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 49, in actionexecute
self.remove_stack_version(structured_output, low_version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 54, in remove_stack_version
packages_to_remove = self.get_packages_to_remove(version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 77, in get_packages_to_remove
all_installed_packages = self.pkg_provider.all_installed_packages()
AttributeError: 'YumManager' object has no attribute 'all_installed_packages'
Please refer to this article if you face the same bug: https://community.hortonworks.com/articles/230893/remove-old-stack-versions-script-doesnt-work-in-am.html
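The failure mode here is generic Python: the script calls a method that this provider class simply does not define. A minimal sketch of the pattern (class and method names mirror the traceback, but the stand-in class and the fallback are hypothetical, not the actual Ambari code):

```python
# Stand-in for Ambari's YumManager, which in 2.7.3 lacks the method
# the custom action script expects.
class YumManager:
    pass

pkg_provider = YumManager()

# A defensive guard avoids the AttributeError; the real fix is patching
# the script as described in the linked article.
if hasattr(pkg_provider, "all_installed_packages"):
    packages = pkg_provider.all_installed_packages()
else:
    packages = []  # fall back instead of crashing

print(packages)  # → []
```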
01-26-2018
07:42 PM
2 Kudos
Hi Anurag: The files under /kafka-logs are the actual data files used by Kafka. They aren't the application logs for the Kafka brokers. The files under /var/log/kafka are the application logs for the brokers. Thank you, Jeff Groves
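The confusing naming comes from Kafka's broker configuration, where the data directory setting is itself called log.dirs (Kafka's "logs" in the log-structured-storage sense). A minimal server.properties sketch, with paths mirroring the ones in this thread (adjust for your install):

```properties
# Broker DATA files -- topic partition segments, not application logs
log.dirs=/kafka-logs

# The brokers' APPLICATION logs are typically routed separately
# (e.g. via log4j / kafka-env) to a path such as /var/log/kafka
```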
12-14-2017
09:04 PM
1 Kudo
Hi @Mark Lee:
Have you attempted to call the consumer and producer with the following parameter appended to the end of the command line:
--security-protocol SASL_PLAINTEXT
As an example, your producer command line would look something like this: bin/kafka-console-producer.sh --broker-list localhost:6667 --topic apple3 --producer.config=/tmp/kafka/producer.properties --security-protocol SASL_PLAINTEXT
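For reference, the /tmp/kafka/producer.properties file referenced in that command would typically carry the matching client security settings. A minimal sketch (the property names are standard Kafka client settings; the service name shown is an assumption for a Kerberized HDP cluster):

```properties
security.protocol=SASL_PLAINTEXT
# Kerberos principal name the brokers run as (assumption: "kafka")
sasl.kerberos.service.name=kafka
```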
07-13-2017
02:58 PM
1 Kudo
When a Kafka cluster is over-subscribed, the loss of a single broker can be a jarring experience for the cluster as a whole. This is especially true when trying to bring a previously failed broker back into a cluster. To help mitigate the impact of returning a broker that has been out of the cluster for a number of days, you can remove that broker's ID from the Replicas list of all partitions before it re-enters the cluster. Generally, you want a Kafka cluster sized to handle single-node failures, but as is often the case, the use case on the cluster can quickly start to exceed its physical limitations. While you wait for new hardware to arrive to augment your cluster, you still need to keep the existing cluster working as well as possible. To that end, there are some AWK scripts available on GitHub that help create the JSON files needed to essentially spoon-feed partitions back onto a broker. This collection of scripts, playfully called Kawkfa, is still alpha at best and has its bugs, but someone may find them useful in the situation above.

The high-level procedure is as follows:
1. For each partition entry that includes the broker.id of the failed node, remove that broker ID from the Replicas list
2. Bring the wayward broker back into the cluster
3. Add the wayward broker ID back to the Replicas list, but do so without making it the preferred replica
4. Once the broker has been added back to its partitions, make the broker the preferred replica for a random number of the partitions

Caveats about the scripts:
- You are using the scripts at your own risk. Be careful and understand what the scripts are doing prior to use.
- There are bugs in the scripts -- most notably, an extra comma is added at the end of the last partition entry that should not be there. Simply removing that comma will allow the JSON file to be properly read.

Have fun!
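Step 1 of the procedure above can be sketched in a few lines against a standard kafka-reassign-partitions JSON file. This is a minimal illustration, not the Kawkfa scripts themselves; the topic names and broker IDs are made up:

```python
import json

def drop_broker(assignment, broker_id):
    """Remove broker_id from every partition's replica list in a
    kafka-reassign-partitions style assignment dict."""
    for p in assignment["partitions"]:
        p["replicas"] = [r for r in p["replicas"] if r != broker_id]
    return assignment

# Illustrative reassignment plan in Kafka's standard JSON layout.
plan = {
    "version": 1,
    "partitions": [
        {"topic": "events", "partition": 0, "replicas": [1, 2, 3]},
        {"topic": "events", "partition": 1, "replicas": [2, 3, 1]},
    ],
}

# Strip failed broker 3 before feeding the JSON back to the tool.
print(json.dumps(drop_broker(plan, 3)))
```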
02-22-2018
08:28 PM
@lgeorge @justin kuspa @Rick Moritz Any further updates on why the R interpreter was removed in 2016? Will functionality differ from RStudio in terms of running R code through the Livy interpreter in Zeppelin?
05-01-2019
08:25 PM
Hi Vedant! You state: "num.io.threads should be greater than the number of disks dedicated for Kafka. I strongly recommend to start with the same number of disks first." Is num.io.threads to be calculated as the number of disks per node allocated to Kafka, or the total number of disks for Kafka across the entire cluster? I'm guessing disks per node dedicated to Kafka, but I wanted to confirm. Thanks, Jeff G.
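Assuming the per-node interpretation, a broker with, say, three data disks would pair its log.dirs entries with a matching thread count. A minimal server.properties sketch (the paths and counts are illustrative, not a recommendation from this thread):

```properties
# Three data disks on this node dedicated to Kafka
log.dirs=/data1/kafka-logs,/data2/kafka-logs,/data3/kafka-logs

# Start with one I/O thread per local disk, per the advice quoted above
num.io.threads=3
```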
04-20-2017
08:57 PM
Note that the <strong> and </strong> strings in the code block above should be removed, since they are HTML formatting commands that somehow became visible in the formatted text of the code block.
04-26-2016
10:53 AM
I am facing the same issue in my production environment, which runs Ambari 2.1.2. I have a question: if this is a Python Kerberos issue, why are we editing the Oozie Python file? Can you explain, please?