Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2001 | 06-15-2020 05:23 AM |
| | 16487 | 01-30-2020 08:04 PM |
| | 2152 | 07-07-2019 09:06 PM |
| | 8363 | 01-27-2018 10:17 PM |
| | 4740 | 12-31-2017 10:12 PM |
08-15-2018
11:31 AM
Hi @Michael Bronson, yeah, that's exactly what I did.
08-15-2018
04:28 PM
@Michael Bronson Can you please mark this as the correct answer if you are satisfied with it?
08-14-2018
06:01 PM
@Michael Bronson Yes, as of now Ambari 2.7 is the latest version, and it is certified for use with HDP 3.0. If this answers your question, then please mark it as the correct answer.
08-13-2018
10:36 PM
@Michael Bronson For Kafka, I guess GC logging is not even enabled, hence you can ignore that part. But you can set it as below, in the kafka-env section. From:

export KAFKA_KERBEROS_PARAMS="-Djavax.security.auth.useSubjectCredsOnly=false"

to:

export KAFKA_KERBEROS_PARAMS="-Djavax.security.auth.useSubjectCredsOnly=false -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=2M"

If you are satisfied with the answers, then please mark my comment as the correct answer; this will help others as well.
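The kafka-env edit described above can be sketched as a small shell fragment (flag values exactly as given in the post; the final exported line is what you would paste into the kafka-env template via Ambari):

```shell
# Rebuild the kafka-env value: keep the existing Kerberos flag and append
# the GC-logging flags (5 rotated files of 2 MB each, per the post).
BASE="-Djavax.security.auth.useSubjectCredsOnly=false"
GC_FLAGS="-XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=2M"
export KAFKA_KERBEROS_PARAMS="${BASE} ${GC_FLAGS}"
echo "$KAFKA_KERBEROS_PARAMS"
```

The broker only picks up the new value after a restart through Ambari.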
08-10-2018
05:47 AM
@Jay, thank you so much. We did the testing and everything is fine; now we need to do the steps via the API.
08-07-2018
11:06 PM
@Michael Bronson The latest version of HDP is currently HDP 3.0, which has already been released. If you want to know the new features added as part of HDP 3.0, you can refer to the following docs:

New Features in HDP 3.0: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/release-notes/content/new_features.html
Behavior Changes in HDP 3.0: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/release-notes/content/behavior_changes.html

An overview of all changes/fixes/known issues in HDP 3.0 can be found here:

Release Notes HDP 3.0: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/release-notes/content/relnotes.html

Similarly, if you want to know the behavior changes/fixes that happened in HDP 2.6.5, that information can be found on the Release Notes page of the HDP 2.6.5 docs: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_release-notes/content/ch_relnotes.html
08-01-2018
06:29 PM
Hi @Michael Bronson, sorry, I missed the comment as you didn't tag me by name. You can follow the same method with --action=set. You can refer to this good document by Apache: https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations#Modifyconfigurations-Editconfigurationusingconfigs.py Please mark the answer as resolved if this helped you 🙂
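As a concrete sketch of the --action=set call (flag names as documented on the Apache wiki page linked above; the host, cluster, credentials, and the property shown are placeholders you must replace with your own values):

```shell
# configs.py ships on the Ambari server host, typically under
# /var/lib/ambari-server/resources/scripts/.
# AMBARI_HOST, CLUSTER_NAME, and the key/value pair are placeholders.
python configs.py \
  -u admin -p admin \
  -l AMBARI_HOST -t 8080 \
  -a set \
  -n CLUSTER_NAME \
  -c core-site \
  -k SOME.PROPERTY.NAME -v "some-value"
```

The script reads the current version of the named config type, applies the key/value change, and posts it back as a new config version, so the change shows up in the Ambari UI history like a manual edit.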
12-04-2018
07:16 PM
1 Kudo
@Michael Bronson ZooKeeper needs an odd number of hosts so it can build a quorum. A 3-node ensemble can survive the loss of 1 node; it will fail on a simultaneous loss of 2 nodes (for example, a node failing during an upgrade). If ZooKeeper goes down, the brokers will not operate. "Designing a ZooKeeper deployment" explains: "For the ZooKeeper service to be active, there must be a majority of non-failing machines that can communicate with each other. To create a deployment that can tolerate the failure of F machines, you should count on deploying 2xF+1 machines. Thus, a deployment that consists of three machines can handle one failure, and a deployment of five machines can handle two failures. Note that a deployment of six machines can only handle two failures since three machines is not a majority. For this reason, ZooKeeper deployments are usually made up of an odd number of machines."
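The 2F+1 rule quoted above reduces to "an ensemble of N machines tolerates floor((N-1)/2) failures", which a quick bit of shell arithmetic illustrates (nothing Kafka-specific, just the quorum math):

```shell
# For N ensemble members, a majority must survive, so the number of
# tolerated failures is floor((N - 1) / 2).
for n in 3 4 5 6 7; do
  echo "$n machines -> tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

Note how 6 machines tolerate no more failures than 5, which is exactly why even-sized ensembles are avoided.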
07-15-2018
03:34 PM
@Geoffrey, any suggestion on how to continue from this point?
07-13-2018
03:38 PM
1 Kudo
@Michael Bronson The practice of using 2x memory for swap space is very old and out of date. It was useful in a time when systems had, for example, 256 MB of RAM, and it does not apply today. Using swap space on Hadoop nodes, workers or masters, is not recommended because it will not prevent you from having issues: by the time swap is being used, RAM has already hit the threshold defined by the swappiness parameter. From the referred post:

"If you have the need to use more memory, or expect to need more, than the amount of RAM which has been purchased, and can accept severe degradation in failure, then you would need a lot of swap configured. You are better off buying the right amount of memory."

"The fear with disabling swap on masters is that an OOM (out of memory) event could affect cluster availability. But that will still happen even with swap configured; it will just take slightly longer. Good administrator/operator practice would be to monitor RAM availability, then fix any issues before running out of memory."

If you really have a requirement to have swap configured on your master nodes, then just set it as you like, for example 1/4 of total system memory, and set the swappiness value to 0.
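If you do go ahead with the 1/4-of-RAM sizing suggested above, the arithmetic and the swappiness setting might look like this (the 64 GB figure and the sysctl persistence path are illustrative assumptions, not from the post):

```shell
# Illustrative sizing for a hypothetical 64 GB master node:
RAM_GB=64
SWAP_GB=$(( RAM_GB / 4 ))
echo "swap size: ${SWAP_GB} GB"

# Then tell the kernel to avoid swapping except under real memory pressure
# (run as root; /etc/sysctl.conf is the usual, assumed, persistence file):
#   sysctl vm.swappiness=0
#   echo 'vm.swappiness=0' >> /etc/sysctl.conf
```

With vm.swappiness=0 the swap space acts purely as a last-resort overflow rather than something the kernel uses proactively.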