Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2059 | 06-15-2020 05:23 AM |
| | 17057 | 01-30-2020 08:04 PM |
| | 2220 | 07-07-2019 09:06 PM |
| | 8557 | 01-27-2018 10:17 PM |
| | 4838 | 12-31-2017 10:12 PM |
10-30-2019
12:27 PM
First, I just want to say thank you for the explanation. For now we cannot work with Kubernetes (for internal reasons), so the only option is Docker. Based on that, do you think a Kafka cluster running in Docker will have lower performance than a Kafka cluster running without Docker?
10-29-2019
06:49 AM
We need to build a production Kafka cluster with 3-5 nodes. We have the following options:
- Kafka in Docker containers (each node runs Kafka, ZooKeeper, and Schema Registry)
- Kafka cluster without Docker (each node runs Kafka, ZooKeeper, and Schema Registry)
Since we are talking about a production cluster, we need good performance: we have heavy reads/writes to disk (disk size is 10T), we need good I/O performance, etc.
So does Kafka in Docker meet the requirements for production clusters?
more info - https://www.infoq.com/articles/apache-kafka-best-practices-to-optimize-your-deployment
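To make the Docker option concrete, here is a minimal sketch of the per-node container we have in mind: host networking and the Kafka log directory bind-mounted onto the local 10T disk, so the broker does not write through Docker's overlay filesystem. The image tag, hostnames, and paths below are placeholders, not our actual configuration:

```
docker run -d --name kafka01 \
  --network host \
  -v /data/kafka-logs:/var/lib/kafka/data \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_ZOOKEEPER_CONNECT=zoo01:2181,zoo02:2181,zoo03:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka01:9092 \
  -e KAFKA_LOG_DIRS=/var/lib/kafka/data \
  confluentinc/cp-kafka:5.3.1
```

The idea is that with host networking and a bind-mounted data volume, disk and network I/O go straight to the host, which is the main performance concern raised above.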
Labels:
- Apache Kafka
- Docker
10-27-2019
04:02 AM
May I return to my first question: while we were using Red Hat 7.2, everything was OK, and after each scratch installation we never saw this. But since we moved to Red Hat 7.5, every cluster we create ends up with corrupted files. Any hint why?
10-27-2019
02:28 AM
About the corrupted files: why not just use the following? hdfs fsck / -delete
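A minimal sketch of that approach, with a preview step before the destructive pass (run as the hdfs user):

```
# Preview which files currently have corrupt or missing blocks
hdfs fsck / -list-corruptfileblocks

# Delete every file that has missing/corrupt blocks
hdfs fsck / -delete
```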
10-26-2019
11:27 PM
Dear Shelton, these are the results we get from hdfs fsck / -storagepolicies:

FSCK started by hdfs (auth:SIMPLE) from /192.9.200.217 for path / at Sun Oct 27 05:49:31 UTC 2019
/hdp/apps/2.6.4.0-91/hive/hive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741831
/hdp/apps/2.6.4.0-91/hive/hive.tar.gz: MISSING 1 blocks of total size 106475099 B
/hdp/apps/2.6.4.0-91/mapreduce/hadoop-streaming.jar: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741834
/hdp/apps/2.6.4.0-91/mapreduce/hadoop-streaming.jar: MISSING 1 blocks of total size 105758 B
/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741825
/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741826
/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: MISSING 2 blocks of total size 212360343 B
/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741829
/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741830
/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: MISSING 2 blocks of total size 135018554 B
/hdp/apps/2.6.4.0-91/slider/slider.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741828
/hdp/apps/2.6.4.0-91/slider/slider.tar.gz: MISSING 1 blocks of total size 47696340 B
/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741832
/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741833
/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: MISSING 2 blocks of total size 189992674 B
/hdp/apps/2.6.4.0-91/tez/tez.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741827
/hdp/apps/2.6.4.0-91/tez/tez.tar.gz: MISSING 1 blocks of total size 53236968 B
/user/ambari-qa/.staging/job_1571958926657_0001/job.jar: Under replicated BP-2095386762-192.9.201.8-1571956239762:blk_1073741864_1131. Target Replicas is 10 but found 5 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/ambari-qa/.staging/job_1571958926657_0001/job.split: Under replicated BP-2095386762-192.9.201.8-1571956239762:blk_1073741865_1132. Target Replicas is 10 but found 5 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Status: CORRUPT

Yes, we checked the replication factor - it is 3. Based on these results, can we just delete the corrupted blocks?
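For reference, a minimal sketch of acting only on the files fsck has flagged instead of deleting blindly; the path is taken from the output above:

```
# Remove one of the files fsck reported as corrupt
hdfs dfs -rm /hdp/apps/2.6.4.0-91/hive/hive.tar.gz

# Re-run fsck afterwards to confirm the filesystem returns to HEALTHY
hdfs fsck /
```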
10-26-2019
12:33 PM
We installed a new Ambari cluster with the following details (we moved to Red Hat 7.5 instead of 7.2):
Red Hat 7.5, HDP 2.6.4, Ambari 2.6.2
After we completed the installation, we noticed very strange behavior (please note that this is a new cluster).
On the HDFS status summary we see messages about under-replicated blocks.
The under-replicated block count is 12, while it should be 0 on a new installation.
Any suggestion why this happens?
I just want to add that this behavior does not appear on Red Hat 7.2.
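For reference, a minimal sketch of how the under-replicated files could be listed and, where a file simply requests more replicas than there are DataNodes (the ambari-qa .staging job files elsewhere in this thread do this), reset back to factor 3; run as the hdfs user, and treat the path as an example from that fsck output:

```
# List the paths fsck reports as under replicated
hdfs fsck / -files -blocks | grep -i "Under replicated"

# Reset a file that asks for 10 replicas back to the cluster default of 3
hdfs dfs -setrep -w 3 /user/ambari-qa/.staging/job_1571958926657_0001/job.jar
```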
09-23-2019
06:54 AM
Just a note: for HDP version 3.1, the Spark version is Apache Spark 2.3.2, not 2.4.
09-17-2019
09:08 AM
Dear Cjervis, we still have not received an answer. We know the latest HDP is 3.1; what we asked about is the next versions - or is this the last HDP version?
09-16-2019
12:08 PM
Dear friends and colleagues,
We are a little worried about the future of HDP releases.
For now the latest HDP is 3.1,
but what comes next?
Labels:
- Hortonworks Data Platform (HDP)
09-11-2019
10:40 AM
We have a Kafka cluster with 3 broker machines and 3 ZooKeeper server machines.
All servers are installed on Red Hat 7.2.
But when we run the following CLI (to verify that all broker IDs exist in ZooKeeper), we get:
zookeeper-shell.sh zoo_server:2181 <<< "ls /brokers/ids"
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[3, 2]
instead of the expected [3, 2, 1].
We checked the first broker (kafka01) by searching for errors in server.log, and we do not see any related error in the log.
Port 2181 is reachable from the Kafka broker to the ZooKeeper machine.
We also restarted kafka01, but that did not make the broker ID appear in the ZooKeeper CLI.
We also tried restarting all three ZooKeeper servers and then restarting kafka01 again, but still no result.
So, any suggestion about this behavior?
Can we add the missing broker ID to ZooKeeper? If yes, then how?
Note - I saw another thread - https://community.cloudera.com/t5/Support-Questions/Specified-config-does-not-exist-in-ZooKeeper/td-p/1875 - but it contains no info about how to add the ID to ZooKeeper.
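For reference, a minimal sketch of how the registration of broker 1 could be checked from kafka01; the config path is the HDP default and the log-directory path is an assumption, so adjust to your layout:

```
# Broker id configured for this host
grep broker.id /usr/hdp/current/kafka-broker/config/server.properties

# Id recorded in the broker's data directory (path is an assumption)
cat /kafka-logs/meta.properties

# Does ZooKeeper hold an ephemeral znode for broker 1?
zookeeper-shell.sh zoo_server:2181 <<< "get /brokers/ids/1"
```

If the last command reports that the node does not exist, the broker never registered itself; that znode is ephemeral and created by the broker on startup, so it usually points at broker-side configuration or connectivity rather than something to add manually in ZooKeeper.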
Labels:
- Apache Kafka
- Apache Zookeeper