Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1928 | 06-15-2020 05:23 AM |
| | 15531 | 01-30-2020 08:04 PM |
| | 2080 | 07-07-2019 09:06 PM |
| | 8130 | 01-27-2018 10:17 PM |
| | 4588 | 12-31-2017 10:12 PM |
10-26-2019
11:27 PM
Dear Shelton, these are the results we get from hdfs fsck / -storagepolicies:

FSCK started by hdfs (auth:SIMPLE) from /192.9.200.217 for path / at Sun Oct 27 05:49:31 UTC 2019
/hdp/apps/2.6.4.0-91/hive/hive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741831
/hdp/apps/2.6.4.0-91/hive/hive.tar.gz: MISSING 1 blocks of total size 106475099 B
/hdp/apps/2.6.4.0-91/mapreduce/hadoop-streaming.jar: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741834
/hdp/apps/2.6.4.0-91/mapreduce/hadoop-streaming.jar: MISSING 1 blocks of total size 105758 B
/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741825
/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741826
/hdp/apps/2.6.4.0-91/mapreduce/mapreduce.tar.gz: MISSING 2 blocks of total size 212360343 B
/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741829
/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741830
/hdp/apps/2.6.4.0-91/pig/pig.tar.gz: MISSING 2 blocks of total size 135018554 B
/hdp/apps/2.6.4.0-91/slider/slider.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741828
/hdp/apps/2.6.4.0-91/slider/slider.tar.gz: MISSING 1 blocks of total size 47696340 B
/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741832
/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741833
/hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz: MISSING 2 blocks of total size 189992674 B
/hdp/apps/2.6.4.0-91/tez/tez.tar.gz: CORRUPT blockpool BP-2095386762-192.9.201.8-1571956239762 block blk_1073741827
/hdp/apps/2.6.4.0-91/tez/tez.tar.gz: MISSING 1 blocks of total size 53236968 B
/user/ambari-qa/.staging/job_1571958926657_0001/job.jar: Under replicated BP-2095386762-192.9.201.8-1571956239762:blk_1073741864_1131. Target Replicas is 10 but found 5 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/ambari-qa/.staging/job_1571958926657_0001/job.split: Under replicated BP-2095386762-192.9.201.8-1571956239762:blk_1073741865_1132. Target Replicas is 10 but found 5 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Status: CORRUPT

Yes, we checked the replication factor - it is 3. Based on these results, can we just delete the corrupted blocks?
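If those blocks really have no live replicas anywhere, one common approach is to list the corrupt files and then delete the files themselves so they can be regenerated (the /hdp/apps archives are typically re-uploaded by Ambari when the related services are restarted or reinstalled). A minimal, hedged sketch, assuming you run it as the hdfs superuser:

# List every file that has missing/corrupt blocks
hdfs fsck / -list-corruptfileblocks

# Delete a specific corrupt file explicitly ...
hdfs dfs -rm -skipTrash /hdp/apps/2.6.4.0-91/hive/hive.tar.gz

# ... or let fsck delete all corrupted files it finds (use with care)
hdfs fsck / -delete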
10-26-2019
12:33 PM
We installed a new Ambari cluster with the following details (we moved to Red Hat 7.5 instead of 7.2):
Red Hat – 7.5, HDP version – 2.6.4, Ambari – 2.6.2
After we completed the installation, we noticed very strange behavior (please note that this is a new cluster).
On the HDFS status summary, we see the following messages about under-replicated blocks.
Under-replicated blocks is 12, while it should be 0 on a new installation.
Any suggestion as to why this happens?
I just want to note that this behavior does not appear on Red Hat 7.2.
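A quick way to see which files account for those 12 under-replicated blocks, and to reset their replication back to the cluster default, is sketched below. Assumptions: the default replication factor is 3, and /user/ambari-qa/.staging is only an example path - the Ambari smoke-test jobs often request replication 10 for their job files:

# Show which files report under-replicated blocks
hdfs fsck / -files -blocks -locations | grep -i "Under replicated"

# Reset the replication factor of an offending path back to 3
hdfs dfs -setrep -w 3 /user/ambari-qa/.staging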
09-23-2019
06:54 AM
Just a note: for HDP version 3.1, the Spark version is Apache Spark 2.3.2, not 2.4.
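If you want to double-check the exact Spark build on a given node (assuming the Spark2 client is installed there), a quick check is:

spark-submit --version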
09-17-2019
09:08 AM
Dear Cjervis, we still have not received an answer. We know the latest HDP is 3.1; what we asked about is the upcoming versions - or will this be the end of the HDP line?
09-16-2019
12:08 PM
Dear friends and colleagues,
we are a little worried about the future of the HDP releases.
For now the latest HDP is 3.1,
but what comes next?
Labels:
- Hortonworks Data Platform (HDP)
09-11-2019
10:40 AM
We have a Kafka cluster with 3 broker machines and 3 ZooKeeper server machines.
All servers are installed on Red Hat 7.2.
But when we run the following CLI (to verify that all broker ids exist in ZooKeeper), we get:
zookeeper-shell.sh zoo_server:2181 <<< "ls /brokers/ids"
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[3, 2] instead of the expected [3, 2, 1]
We checked the first broker (kafka01) by searching for errors in server.log,
and we do not see any related error in the log!
Port 2181 from the Kafka broker to the ZooKeeper machine is working.
We also restarted kafka01, but that did not help to get the broker id into ZooKeeper.
We also tried to restart all ZooKeeper servers (there are 3) and then restart kafka01 again, but still without results.
So, any suggestion about this behavior?
Can we add the missing broker to ZooKeeper from the CLI? If yes, then how?
Note - I saw another thread - https://community.cloudera.com/t5/Support-Questions/Specified-config-does-not-exist-in-ZooKeeper/td-p/1875
but there is no info about how to add an id to ZooKeeper.
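Note that the entries under /brokers/ids are ephemeral znodes that each broker creates itself when it connects to ZooKeeper, so the missing id normally cannot (and should not) be added by hand - it should reappear on its own once kafka01 registers successfully. A hedged way to compare a working broker's registration with kafka01's configuration (broker id 2 and the config path are just examples):

# Show the registration data (endpoints, timestamp) of a broker that is present
zookeeper-shell.sh zoo_server:2181 <<< "get /brokers/ids/2"

# Check kafka01's configured broker.id and zookeeper.connect
grep -E "^(broker.id|zookeeper.connect)" /etc/kafka/conf/server.properties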
Labels:
- Apache Kafka
- Apache Zookeeper
07-27-2019
07:19 PM
@Jay as you know, we are using a script that calls the API to delete the service. So if there is no need to restart the Ambari server, my understanding is that deletion via the API is all that is needed and no additional steps are required - correct me if I am wrong.
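For reference, a minimal sketch of the calls such a script typically makes against the Ambari REST API (assuming an Ambari server at ambari-host:8080, a cluster named MYCLUSTER and a service named SERVICE_NAME; the service usually has to be stopped, i.e. put into the INSTALLED state, before it can be deleted):

# Stop the service first
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://ambari-host:8080/api/v1/clusters/MYCLUSTER/services/SERVICE_NAME

# Then delete it
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  http://ambari-host:8080/api/v1/clusters/MYCLUSTER/services/SERVICE_NAME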
07-27-2019
07:16 PM
@David Sanchez we are using a script to add or delete services; we do not do it from the Ambari UI because we manage all services through an automation script.
07-25-2019
09:22 AM
@David Sanchez one other thing please - can you help me with this post: https://community.hortonworks.com/questions/249557/is-it-necessary-to-restart-the-ambari-server-after.html
07-24-2019
09:48 PM
Hi all,
We have a production cluster with HDP 2.6.4 and 186 DataNode machines (Dell machines with 10 disks each).
We are trying to rebalance the data across the disks so that all disks end up with roughly the same used size, but without success. We feel that version 2.6.4 does not have the tools to support this kind of rebalancing.
As mentioned, on each DataNode we have 10 disks of 1.8 TB each; some of the disks are 55% used and some are only 1% used, so the disks are not balanced (it is as if some disks are barely useful). Why does HDFS not balance the data across all disks?
My question: from which HDP version can we rebalance the DataNode disks? Does 2.6.5 support it, or only 3.x?
Please advise what we can do. As I mentioned, this is a very large cluster, and we have the bad feeling that the current HDP version (2.6.4) does not support any intra-node rebalancing - is that true?
Example:
/dev/sdc 3842878616 357409860 3485452372 10% /data_hdfs/sdc
/dev/sde 3842878616 460433776 3382428456 42% /data_hdfs/sde
/dev/sdi 3842878616 8606628 34255604 1% /data_hdfs/sdi
/dev/sdg 3842878616 256937520 85924712 7% /data_hdfs/sdg
/dev/sdd 3842878616 465520852 3377341380 53% /data_hdfs/sdd
/dev/sdh 3842878616 90136 42772096 1% /data_hdfs/sdh
/dev/sdb 3842878616 466423860 3376438372 53% /data_hdfs/sdb
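For what it is worth, the intra-DataNode disk balancer (hdfs diskbalancer) was added in Apache Hadoop 3.0, so it is available in HDP 3.x but not in HDP 2.6.x; the classic hdfs balancer in 2.6.x only balances data between DataNodes, not between the disks of a single node. A rough sketch of its use, assuming an HDP 3.x cluster with dfs.disk.balancer.enabled=true in hdfs-site.xml and a hypothetical DataNode hostname dn01.example.com:

# Generate a move plan for one DataNode (writes a .plan.json file; its path is printed)
hdfs diskbalancer -plan dn01.example.com

# Execute the generated plan on that DataNode
hdfs diskbalancer -execute /system/diskbalancer/<timestamp>/dn01.example.com.plan.json

# Check progress / final status
hdfs diskbalancer -query dn01.example.com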