Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1998 | 06-15-2020 05:23 AM |
| | 16446 | 01-30-2020 08:04 PM |
| | 2146 | 07-07-2019 09:06 PM |
| | 8341 | 01-27-2018 10:17 PM |
| | 4729 | 12-31-2017 10:12 PM |
10-09-2020
12:56 AM
Hi all, we have an HDP 2.6.4 cluster with 245 worker machines; each worker runs a DataNode and a resource manager role. We want to add 10 new worker machines to the cluster, but we want to disable the DataNode on the new machines so that no data is transferred from the old DataNodes to the new DataNodes. I am thinking of putting the new DataNodes into maintenance mode, but I am not sure whether that action is enough to disable the DataNode component on the new workers.
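A minimal sketch of how this could be done per host through the Ambari REST API, assuming Ambari listens on port 8080 and assuming the hypothetical names `ambari-host`, cluster `HDP_CLUSTER`, and new worker `worker246.sys77.com`; it puts only the DATANODE host component into maintenance mode and stops it, and is not confirmation that maintenance mode alone blocks block placement:

```bash
AMBARI=http://ambari-host:8080
CLUSTER=HDP_CLUSTER
HOST=worker246.sys77.com

# Turn maintenance mode ON for the DATANODE component on the new worker
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Maintenance ON for new DataNode"},"Body":{"HostRoles":{"maintenance_state":"ON"}}}' \
  "$AMBARI/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/DATANODE"

# Also stop the DATANODE component so HDFS cannot place blocks on it
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop new DataNode"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' \
  "$AMBARI/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/DATANODE"
```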
Labels:
- Ambari Blueprints
09-13-2020
09:17 AM
Hi all, we are now performing the hostname-change procedure on a production cluster according to this document: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari/content/amb_changing_host_names.html
The last step says that in case NameNode HA is enabled, the following command needs to be run on one of the NameNodes: `hdfs zkfc -formatZK -force`
Since we have an active NameNode and a standby NameNode, we assume that NameNode HA is enabled. But we want to understand the risks of running `hdfs zkfc -formatZK -force` on one of the NameNodes. Is this command safe to run without risks?
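For reference, a minimal sketch of how one might verify that HA really is enabled before running the step; `nn1`/`nn2` are assumed NameNode IDs (the real ones come from `dfs.ha.namenodes.<nameservice>`), and the final command is the one from the document, run on a single NameNode while the ZKFC daemons are stopped:

```bash
# A nameservice plus two NameNode IDs indicates NameNode HA is configured
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.<nameservice>

# Check which NameNode is currently active and which is standby (nn1/nn2 are assumed IDs)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# The step from the document: re-creates the HA state znode (/hadoop-ha) in ZooKeeper;
# -force overwrites the existing state, so run it on ONE NameNode only
hdfs zkfc -formatZK -force
```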
Labels:
- Ambari Blueprints
09-13-2020
09:08 AM
Thank you for the post, but another question: according to the document https://docs.cloudera.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari/content/amb_changing_host_names.html, the last step says that in case NameNode HA is enabled, the following command needs to be run on one of the NameNodes: `hdfs zkfc -formatZK -force`
Since we have an active NameNode and a standby NameNode (this is an example from our cluster), we assume that NameNode HA is enabled. But we want to understand the risks of running `hdfs zkfc -formatZK -force` on one of the NameNodes. Is this command safe to run without risks?
09-08-2020
01:42 PM
We have an HDP cluster version `2.6.5` with Ambari version `2.6.1`.
The cluster includes 3 master machines and 211 DataNode machines (worker machines); all machines run `rhel 7.2`.
Example master hostnames:
master1.sys77.com , master2.sys77.com , master3.sys77.com …
And the DataNode machines are:
worker01.sys77.com , worker02.sys77.com , ... , worker211.sys77.com
Now we want to change the domain name from `sys77.com` to `bigdata.com`.
What is the procedure to replace the `domain name` (`sys77.com`) for a Hadoop cluster (HDP cluster with Ambari)?
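A minimal sketch of the Ambari-side step from the hostname-change document, assuming the hypothetical cluster name `HDP_CLUSTER`; the new `bigdata.com` FQDNs must already resolve, and all services plus ambari-server and the ambari-agents should be stopped before running it:

```bash
# host_names_changes.json maps old FQDNs to new FQDNs per cluster
# (only two hosts shown here; the real file needs every renamed machine)
cat > host_names_changes.json <<'EOF'
{
  "HDP_CLUSTER": {
    "master1.sys77.com": "master1.bigdata.com",
    "worker01.sys77.com": "worker01.bigdata.com"
  }
}
EOF

# Run on the Ambari server host while ambari-server is stopped
ambari-server update-host-names host_names_changes.json
```

After the rename, any configs that still embed `sys77.com` would need to be reviewed, and the NameNode HA step (`hdfs zkfc -formatZK -force`) from the same document may apply.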
08-13-2020
02:04 PM
Another question: let's say the last snapshot is corrupted; how does ZooKeeper know to use the previous good snapshot instead of the last one?
08-13-2020
01:59 PM
Can you also explain the difference between the snapshot files and the log files in ZooKeeper under the version-2 directory?
08-13-2020
01:57 PM
So if you do not recommend keeping 3 backups (I get the feeling you recommend more than 3), then what is the number of backups with which we can sleep well? :-)
08-13-2020
12:45 PM
A ZooKeeper server creates snapshot and log files but never deletes them, so we need to take care of the retention policy. How do we decide on the right number of ZooKeeper snapshot files to keep? Note that the ZooKeeper server itself only needs the latest complete fuzzy snapshot and the log files from the start of that snapshot. But since ZooKeeper keeps creating snapshot files, how many of these snapshot backups do we need to retain? Snapshots can sometimes be corrupted, so the retention of snapshot files should take this into consideration. On our ZooKeeper server we see that a snapshot backup is created roughly every day.

Example of the snapshot files from my ZooKeeper server:

-rw-r--r-- 1 ZooKeeper hadoop 458138861 Aug 10 07:12 snapshot.19000329d1
-rw-r--r-- 1 ZooKeeper hadoop 458138266 Aug 10 07:13 snapshot.19000329de
-rw-r--r-- 1 ZooKeeper hadoop 458143590 Aug 10 09:24 snapshot.1900032d7a
-rw-r--r-- 1 ZooKeeper hadoop 458142983 Aug 10 09:25 snapshot.1900032d84
-rw-r--r-- 1 ZooKeeper hadoop 458138686 Aug 11 03:42 snapshot.1900034b74
-rw-r--r-- 1 ZooKeeper hadoop 458138686 Aug 12 01:51 snapshot.1900036fa3
-rw-r--r-- 1 ZooKeeper hadoop 458138079 Aug 12 03:03 snapshot.1900037196
-rw-r--r-- 1 ZooKeeper hadoop 458138686 Aug 12 03:08 snapshot.19000371c8
-rw-r--r-- 1 ZooKeeper hadoop 458138432 Aug 12 03:09 snapshot.19000371de
-rw-r--r-- 1 ZooKeeper hadoop 458138091 Aug 12 12:02 snapshot.1900038053
-rw-r--r-- 1 ZooKeeper hadoop 458138091 Aug 12 18:04 snapshot.1900038a39
-rw-r--r-- 1 ZooKeeper hadoop 458138091 Aug 13 13:01 snapshot.190003a923
-rw-r--r-- 1 ZooKeeper hadoop 2 Aug 13 13:01 currentEpoch
-rw-r--r-- 1 ZooKeeper hadoop 67108880 Aug 13 21:17 log.190002d2ce
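A minimal sketch of handling retention with ZooKeeper's built-in purging instead of keeping everything; the `autopurge` settings are standard ZooKeeper configuration, while the retain count of 5 and the HDP install path are assumptions to adjust:

```bash
# zoo.cfg (or the equivalent fields in the Ambari ZooKeeper configuration):
#   autopurge.snapRetainCount=5   # keep only the 5 newest snapshots plus their txn logs
#   autopurge.purgeInterval=24    # run the purge task every 24 hours

# One-off manual purge with the helper shipped with ZooKeeper;
# the install path below is the usual HDP layout and is an assumption here.
/usr/hdp/current/zookeeper-server/bin/zkCleanup.sh -n 5
```

ZooKeeper does not accept an `autopurge.snapRetainCount` lower than 3, so 3 is the practical minimum; a slightly higher count leaves headroom in case the most recent snapshot turns out to be corrupted.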
Labels:
- Apache Kafka
08-06-2020
04:03 AM
Hi, we have an HDP 2.6.5 cluster and an HDP 2.6.4 cluster. We want to know whether these HDP versions support the parameters that configure the HDFS Disk Balancer, because from this link - https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/data-storage/content/diskbalancer_configuration_parameters.html - it seems that only HDP 3.x supports the Disk Balancer. In simple words: can we add the following parameters to the HDFS config in Ambari for HDP 2.6.4/2.6.5?
- dfs.disk.balancer.enabled
- dfs.disk.balancer.max.disk.throughputInMBperSec
- dfs.disk.balancer.max.disk.errors
- dfs.disk.balancer.block.tolerance.percent
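A minimal sketch of checking whether the Hadoop build shipped with these HDP 2.6.x clusters exposes the Disk Balancer at all (the feature comes from HDFS-1312); if the subcommand is missing, adding the `dfs.disk.balancer.*` parameters in Ambari would have no effect. The hostname is only an example:

```bash
# If the hdfs CLI knows the 'diskbalancer' subcommand, the feature is compiled in;
# an error instead of usage output means this HDP build does not include it.
hdfs diskbalancer -help

# Typical usage where it is supported (worker01.example.com is an example DataNode host):
#   hdfs diskbalancer -plan worker01.example.com
#   hdfs diskbalancer -execute <generated plan file in HDFS>
```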
Labels:
- Apache Ambari
07-22-2020
03:25 PM
We set the `retention bytes` value `104857600` for the topic `topic_test`:

[root@confluent01 ~]# kafka-topics --zookeeper localhost:2181 --alter --topic topic_test --config retention.bytes=104857600
WARNING: Altering topic configuration from this script has been deprecated and may be removed in future releases. Going forward, please use kafka-configs.sh for this functionality
Updated config for topic "topic_test".

Now we verify the `retention bytes` from ZooKeeper:

[root@confluent01 ~]# zookeeper-shell confluent01:2181 get /config/topics/topic_test
Connecting to confluent1:2181
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
{"version":1,"config":{"retention.bytes":"104857600"}}
cZxid = 0xb30a00000038
ctime = Mon Jun 29 11:42:30 GMT 2020
mZxid = 0xb31100008978
mtime = Wed Jul 22 19:22:20 GMT 2020
pZxid = 0xb30a00000038
cversion = 0
dataVersion = 7
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 54
numChildren = 0

Then we performed a `reboot` of the Kafka machine confluent01. After the machine started and the Kafka service came up successfully, we checked the `retention bytes` from ZooKeeper again, but now (after the machine reboot) we can see that the `retention bytes` is no longer configured in ZooKeeper:

[root@confluent01 ~]# zookeeper-shell confluent01:2181 get /config/topics/topic_test
Connecting to confluent1:2181
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
{"version":1,"config":{}}        <-- no retention.bytes value
cZxid = 0xb30a00000038
ctime = Mon Jun 29 11:42:30 GMT 2020
mZxid = 0xb3110000779b
mtime = Wed Jul 22 14:09:19 GMT 2020
pZxid = 0xb30a00000038
cversion = 0
dataVersion = 2
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 25
numChildren = 0

**The question is** - how do we keep the `retention bytes` setting even after a restart of the Kafka machine?

***NOTE - we do not want to use the retention bytes from `server.properties`, because we set a different retention bytes value for each topic.***
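A minimal sketch of setting the same per-topic override with `kafka-configs` (the tool the deprecation warning above points to) instead of `kafka-topics --alter`; the host, port, and topic are the ones from the post, and whether this override survives the reboot on this Confluent setup is exactly what would need to be re-tested:

```bash
# Set the per-topic retention.bytes override via kafka-configs
# (stored under /config/topics/topic_test in ZooKeeper, the same znode checked above)
kafka-configs --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name topic_test \
  --add-config retention.bytes=104857600

# Confirm the override is present
kafka-configs --zookeeper localhost:2181 --describe \
  --entity-type topics --entity-name topic_test
```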
Labels:
- Apache Kafka