Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1944 | 06-15-2020 05:23 AM |
| | 15823 | 01-30-2020 08:04 PM |
| | 2093 | 07-07-2019 09:06 PM |
| | 8177 | 01-27-2018 10:17 PM |
| | 4639 | 12-31-2017 10:12 PM |
07-31-2018
03:24 PM
We have an Ambari cluster version 2.6.1 with HDP version 2.6.4. I want to add the following properties via the API; in the UI this would be done under Ambari dashboard --> YARN --> Configs --> Advanced --> Custom yarn-site --> "Add Property":

yarn.nodemanager.localizer.cache.target-size-mb = 10240
yarn.nodemanager.localizer.cache.cleanup.interval-ms = 300000

so that both parameters end up in Custom yarn-site. Please advise how to do this task with the REST API.
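One way to do this is with the configs.sh helper that ships with ambari-server, which wraps the cluster configuration REST calls; a minimal sketch, assuming placeholder values for the Ambari host, cluster name, and admin credentials:

```bash
# Placeholders: replace the Ambari host, cluster name, and credentials with your own.
AMBARI_HOST=ambari.example.com
CLUSTER=mycluster

# configs.sh ships with ambari-server and wraps the /api/v1/clusters/<cluster> config API:
# "set" reads the latest yarn-site, adds/updates the key, and posts a new config version.
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set "$AMBARI_HOST" "$CLUSTER" yarn-site \
  yarn.nodemanager.localizer.cache.target-size-mb 10240

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set "$AMBARI_HOST" "$CLUSTER" yarn-site \
  yarn.nodemanager.localizer.cache.cleanup.interval-ms 300000
```

Keys that are not part of the stock Advanced yarn-site template appear under Custom yarn-site in the UI, and YARN still needs a restart afterwards for the new values to take effect.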
07-31-2018
02:22 PM
@Jordan Moore what are the risks if we still use only 3 ZooKeeper servers with 17 Kafka machines?
07-25-2018
07:37 PM
So, just to summarize what you said: do you mean that we need a minimum of 5 ZooKeeper servers for 17 Kafka machines? Or, in other words, how many ZooKeeper servers would you suggest for the following cluster?

master machines - 3
kafka machines - 17
worker machines - 160
07-25-2018
04:35 PM
We hear that Kafka should have an odd number of nodes to avoid split-brain scenarios, but can we get more info about this? Why should Kafka be an odd number? We want to create the following Ambari cluster based on HDP version 2.6.5:

master machines - 3
kafka machines - 17
worker machines - 160
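For context, the odd-number guidance usually refers to the ZooKeeper ensemble that Kafka depends on rather than the brokers themselves; a short sketch of the quorum arithmetic behind it:

```latex
% A ZooKeeper ensemble of n servers stays available only while a strict majority is up:
%   quorum(n) = floor(n/2) + 1,   tolerated failures f(n) = n - quorum(n) = floor((n-1)/2)
% n = 3: quorum 2, tolerates 1 failure
% n = 4: quorum 3, still tolerates only 1 failure (the extra node adds no fault tolerance)
% n = 5: quorum 3, tolerates 2 failures
\[
  \mathrm{quorum}(n) = \left\lfloor \tfrac{n}{2} \right\rfloor + 1,
  \qquad
  f(n) = n - \mathrm{quorum}(n) = \left\lfloor \tfrac{n-1}{2} \right\rfloor
\]
```

So an even-sized ensemble costs an extra machine without tolerating any additional failures, which is why odd sizes (3, 5, 7) are the usual recommendation.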
07-15-2018
03:34 PM
@Geoffrey, any suggestion on how to continue from this point?
07-15-2018
12:20 PM
@Geoffrey yes, I already removed it, and the error is exactly the same one we have already seen.
07-15-2018
12:06 PM
@Geoffrey unfortunately, after setting the variables we still have the issue. We restarted the HDFS service and also the worker machine, but we still see: "Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try".
07-15-2018
09:56 AM
Should I set this variable to true (dfs.client.block.write.replace-datanode-on-failure.policy=true) in Custom hdfs-site?
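As a side note on the key itself: in hdfs-default.xml, dfs.client.block.write.replace-datanode-on-failure.policy expects DEFAULT, ALWAYS, or NEVER rather than true/false; the boolean switches are the .enable and .best-effort keys. A hedged sketch of setting those two through Ambari's configs.sh helper (placeholder host, cluster name, and credentials), mirroring the combination tried elsewhere in this thread:

```bash
# Placeholders: replace the Ambari host, cluster name, and credentials with your own.
AMBARI_HOST=ambari.example.com
CLUSTER=mycluster

# Disable automatic datanode replacement in the write pipeline, and make any
# replacement attempts best-effort instead of failing the write.
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set "$AMBARI_HOST" "$CLUSTER" hdfs-site \
  dfs.client.block.write.replace-datanode-on-failure.enable false

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set "$AMBARI_HOST" "$CLUSTER" hdfs-site \
  dfs.client.block.write.replace-datanode-on-failure.best-effort true
```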
07-15-2018
09:42 AM
I removed this variable from Custom hdfs-site.
07-15-2018
08:21 AM
@Geoffrey I set the following in Custom hdfs-site, per your recommendation, and we restarted the HDFS service:

dfs.client.block.write.replace-datanode-on-failure.best-effort=true
dfs.client.block.write.replace-datanode-on-failure.enable=false

but the errors still appear in the log (on the first DataNode machine - worker01):

2018-07-15T08:14:49.049 ERROR [driver][][] [org.apache.spark.scheduler.LiveListenerBus] Listener EventLoggingListener threw an exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[:50010,DS-f5c5260a-20b1-43f4-b8fd-53e88db2e48e,DISK], DatanodeInfoWithStorage[:50010,DS-b4758979-52a2-4238-99f0-1b5ec45a7e25,DISK]], original=[DatanodeInfoWithStorage[:50010,DS-f5c5260a-20b1-43f4-b8fd-53e88db2e48e,DISK], DatanodeInfoWithStorage[:50010,DS-b4758979-52a2-4238-99f0-1b5ec45a7e25,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1059)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1122)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1280)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1005)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:512)
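Since the message comes from the write pipeline running out of healthy DataNodes, a few read-only checks may help narrow it down; a sketch assuming the standard HDFS client CLI is available on the host where the Spark driver runs:

```bash
# Read-only diagnostics; none of these change cluster state.

# How many DataNodes does the NameNode currently consider live vs. dead?
# (dfsadmin typically needs to run as the hdfs superuser)
hdfs dfsadmin -report | grep -E 'Live datanodes|Dead datanodes'

# Effective client-side values as seen from this host's configuration.
hdfs getconf -confKey dfs.replication
hdfs getconf -confKey dfs.client.block.write.replace-datanode-on-failure.policy
hdfs getconf -confKey dfs.client.block.write.replace-datanode-on-failure.enable
hdfs getconf -confKey dfs.client.block.write.replace-datanode-on-failure.best-effort
```

Because these are dfs.client.* keys, what counts is the configuration visible to the writing client (here the Spark driver logging via EventLoggingListener), so the Spark job may also need to be restarted after the hdfs-site change, not just the HDFS service.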