Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1911 | 06-15-2020 05:23 AM |
| | 15416 | 01-30-2020 08:04 PM |
| | 2047 | 07-07-2019 09:06 PM |
| | 8091 | 01-27-2018 10:17 PM |
| | 4555 | 12-31-2017 10:12 PM |
08-13-2020
02:04 PM
Another question: let's say the last snapshot is corrupted. How does ZooKeeper know to fall back to the last good snapshot before it?
08-13-2020
01:59 PM
Can you also explain the difference between a snapshot and a log file in ZooKeeper under the version-2 directory?
08-13-2020
01:57 PM
So if you don't recommend keeping only 3 backups (I feel you recommended more than 3), then how many backups do we need to keep so that we can sleep well? :-)
08-13-2020
12:45 PM
The ZooKeeper server creates snapshot and log files but never deletes them, so we need to take care of the retention policy. How do we decide on the right number of ZooKeeper snapshot files to keep? Note that the ZooKeeper server itself only needs the latest complete fuzzy snapshot and the log files from the start of that snapshot. But since ZooKeeper keeps older snapshot files as backups, how many of them do we need to retain? Snapshots can sometimes be corrupted, so the retention policy should take this into consideration. On our ZooKeeper server we see that a new snapshot is created roughly every day. Example of the snapshot files from my ZooKeeper server:

```
-rw-r--r-- 1 ZooKeeper hadoop 458138861 Aug 10 07:12 snapshot.19000329d1
-rw-r--r-- 1 ZooKeeper hadoop 458138266 Aug 10 07:13 snapshot.19000329de
-rw-r--r-- 1 ZooKeeper hadoop 458143590 Aug 10 09:24 snapshot.1900032d7a
-rw-r--r-- 1 ZooKeeper hadoop 458142983 Aug 10 09:25 snapshot.1900032d84
-rw-r--r-- 1 ZooKeeper hadoop 458138686 Aug 11 03:42 snapshot.1900034b74
-rw-r--r-- 1 ZooKeeper hadoop 458138686 Aug 12 01:51 snapshot.1900036fa3
-rw-r--r-- 1 ZooKeeper hadoop 458138079 Aug 12 03:03 snapshot.1900037196
-rw-r--r-- 1 ZooKeeper hadoop 458138686 Aug 12 03:08 snapshot.19000371c8
-rw-r--r-- 1 ZooKeeper hadoop 458138432 Aug 12 03:09 snapshot.19000371de
-rw-r--r-- 1 ZooKeeper hadoop 458138091 Aug 12 12:02 snapshot.1900038053
-rw-r--r-- 1 ZooKeeper hadoop 458138091 Aug 12 18:04 snapshot.1900038a39
-rw-r--r-- 1 ZooKeeper hadoop 458138091 Aug 13 13:01 snapshot.190003a923
-rw-r--r-- 1 ZooKeeper hadoop         2 Aug 13 13:01 currentEpoch
-rw-r--r-- 1 ZooKeeper hadoop  67108880 Aug 13 21:17 log.190002d2ce
```
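For reference, ZooKeeper ships a built-in retention mechanism for exactly this; a minimal sketch, assuming the standard zoo.cfg and a version-2 data directory layout (the paths and the count of 3 here are illustrative, not a recommendation):

```bash
# zoo.cfg - let ZooKeeper purge old files itself:
# keep the 3 most recent snapshots plus the txn logs needed to replay them,
# checking every 24 hours (snapRetainCount must be >= 3)
#   autopurge.snapRetainCount=3
#   autopurge.purgeInterval=24

# Equivalent one-off manual purge with the bundled utility;
# keeps the newest 3 snapshots and the logs newer than the oldest kept one
java -cp "$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*" \
  org.apache.zookeeper.server.PurgeTxnLog \
  /data/zookeeper /data/zookeeper -n 3
```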
Labels:
- Apache Kafka
08-06-2020
04:03 AM
Hi, we have an HDP 2.6.5 cluster and an HDP 2.6.4 cluster. We want to know whether these HDP versions support the parameters to configure the Disk Balancer, because from this link - https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/data-storage/content/diskbalancer_configuration_parameters.html - it seems that only HDP 3.x supports the Disk Balancer. So, in simple words: can we add the following parameters to the HDFS config in Ambari for HDP 2.6.4/2.6.5?

- dfs.disk.balancer.enabled
- dfs.disk.balancer.max.disk.throughputInMBperSec
- dfs.disk.balancer.max.disk.errors
- dfs.disk.balancer.block.tolerance.percent
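If it helps, a quick, hedged way to probe whether a given Hadoop build ships the Disk Balancer at all (standard HDFS CLI, run on any HDFS client node):

```bash
# The 'diskbalancer' subcommand only exists in builds that include the feature
hdfs diskbalancer -help || echo "diskbalancer not available in this build"

# Ask the client config whether the key is known/set; on builds without the
# feature this typically reports the key as missing
hdfs getconf -confKey dfs.disk.balancer.enabled
```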
Labels:
- Apache Ambari
07-22-2020
03:25 PM
We set the `retention.bytes` value `104857600` for the topic `topic_test`:

```
[root@confluent01 ~]# kafka-topics --zookeeper localhost:2181 --alter --topic topic_test --config retention.bytes=104857600
WARNING: Altering topic configuration from this script has been deprecated and may be removed in future releases.
         Going forward, please use kafka-configs.sh for this functionality
Updated config for topic "topic_test".
```

Now we verify the `retention.bytes` from ZooKeeper:

```
[root@confluent01 ~]# zookeeper-shell confluent01:2181 get /config/topics/topic_test
Connecting to confluent1:2181

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
{"version":1,"config":{"retention.bytes":"104857600"}}
cZxid = 0xb30a00000038
ctime = Mon Jun 29 11:42:30 GMT 2020
mZxid = 0xb31100008978
mtime = Wed Jul 22 19:22:20 GMT 2020
pZxid = 0xb30a00000038
cversion = 0
dataVersion = 7
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 54
numChildren = 0
```

Then we performed a `reboot` of the confluent01 Kafka machine. After the machine started and the Kafka service came up successfully, we checked the `retention.bytes` in ZooKeeper again, but now (after the machine reboot) we can see that the `retention.bytes` value is no longer configured:

```
[root@confluent01 ~]# zookeeper-shell confluent01:2181 get /config/topics/topic_test
Connecting to confluent1:2181

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
{"version":1,"config":{}}
cZxid = 0xb30a00000038
ctime = Mon Jun 29 11:42:30 GMT 2020
mZxid = 0xb3110000779b
mtime = Wed Jul 22 14:09:19 GMT 2020
pZxid = 0xb30a00000038
cversion = 0
dataVersion = 2
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 25
numChildren = 0
```

**The question is**: how do we make `retention.bytes` persist even after a restart of the Kafka machine?

***NOTE - we do not want to use the retention bytes from `server.properties`, because we set different retention bytes for each topic.***
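As the deprecation warning in the transcript above already hints, per-topic overrides are meant to be managed through kafka-configs rather than kafka-topics; a minimal sketch of the equivalent commands, assuming the same topic and ZooKeeper address as above:

```bash
# Set the per-topic override through kafka-configs instead of kafka-topics
kafka-configs --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name topic_test \
  --add-config retention.bytes=104857600

# Read the override back to confirm it is stored
kafka-configs --zookeeper localhost:2181 --describe \
  --entity-type topics --entity-name topic_test
```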
Labels:
- Apache Kafka
07-18-2020
02:16 PM
Hi, what do you mean by "The package installation is almost deprecated"? Does it mean the latest version will reach end of life at some point?
07-13-2020
03:16 AM
1 Kudo
From Ambari we can capture all the versions by clicking the `Admin` button, then `Stack and Versions`, and finally the `Versions` tab; there we get the details. We want to know how to capture all these versions using the Ambari REST API. We tried:

```
curl -u admin:admin -H 'X-Requested-By:admin' 'http://localhost:8080/api/v1/clusters/HDP/configuratons/service_config_versions'
```

but it does not return any info.
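For what it's worth, the URL in the attempt above misspells `configurations`; a sketch of two endpoints that should return version information, assuming the cluster really is named HDP as in the URL above:

```bash
# The original attempt with the path spelled correctly
curl -u admin:admin -H 'X-Requested-By: admin' \
  'http://localhost:8080/api/v1/clusters/HDP/configurations/service_config_versions'

# Repository/stack versions registered on the cluster
curl -u admin:admin -H 'X-Requested-By: admin' \
  'http://localhost:8080/api/v1/clusters/HDP/stack_versions'
```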
Labels:
- Ambari Blueprints
07-07-2020
09:57 AM
I have a remote server and an authenticated Hadoop environment. I want to copy a file from the remote server to a Hadoop machine and into HDFS. Please advise an efficient approach/HDFS command to copy files from the remote server to HDFS; any example would be helpful. The ordinary way to copy a file from a remote server to the server itself is `scp -rp file remote_server:/tmp`, but this approach does not support copying directly to HDFS.
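A common pattern is to hop through an HDFS client node; a minimal sketch, assuming SSH access to such a node and with `hadoop_node` and the HDFS paths as placeholders:

```bash
# Two-step: copy to the Hadoop machine, then load into HDFS
scp -rp file hadoop_node:/tmp/
ssh hadoop_node "hdfs dfs -put /tmp/file /user/myuser/"

# One-step: stream over SSH straight into HDFS; '-' tells put to read stdin,
# so the file never lands on the Hadoop machine's local disk
cat file | ssh hadoop_node "hdfs dfs -put - /user/myuser/file"
```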
Labels:
- Hortonworks Data Platform (HDP)
07-06-2020
04:53 AM
We have a Hadoop cluster with only 2 DataNode machines, and in the HDFS configuration we defined the block replication factor as 3 (Block replication = 3). Is it OK to define Block replication = 3 when we have only two DataNodes in the cluster? From my understanding, defining Block replication = 3 with 2 DataNode machines in the HDFS cluster means that one machine should hold 2 replicas and the other machine 1 replica. Am I correct here?
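For what it's worth, HDFS places at most one replica of a given block on each DataNode, so with 2 DataNodes and replication = 3 every block ends up with 2 live replicas and is reported as under-replicated. A quick way to see this on a live cluster (standard fsck; the path is a placeholder):

```bash
# The fsck summary includes an "Under-replicated blocks" count; with only two
# DataNodes and replication=3, expect every block to report 2 of 3 replicas
hdfs fsck / | grep -i 'under-replicated'
```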
Labels:
- Ambari Blueprints