Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2083 | 06-15-2020 05:23 AM |
| | 17306 | 01-30-2020 08:04 PM |
| | 2246 | 07-07-2019 09:06 PM |
| | 8673 | 01-27-2018 10:17 PM |
| | 4900 | 12-31-2017 10:12 PM |
12-03-2018 10:21 PM
I also think that when we have only 3 ZooKeeper servers and one fails, then because of the split-brain issue the other two ZooKeepers can fail as well. Do you agree?
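The standard ZooKeeper quorum arithmetic behind this question can be sketched as follows (a minimal illustration, not a statement about any specific deployment): an ensemble of n servers stays available as long as a strict majority is up, so it tolerates floor((n - 1) / 2) failures.

```python
def tolerated_failures(ensemble_size: int) -> int:
    """A ZooKeeper ensemble stays available while a strict majority
    of servers is up, so it tolerates floor((n - 1) / 2) failures."""
    return (ensemble_size - 1) // 2

def has_quorum(ensemble_size: int, alive: int) -> bool:
    """True if the alive servers still form a strict majority."""
    return alive > ensemble_size // 2

print(tolerated_failures(3))  # -> 1
print(has_quorum(3, 2))       # -> True  (a 3-node ensemble survives one failure)
print(has_quorum(3, 1))       # -> False (a second failure loses the quorum)
```

So with 3 servers and one already down, the remaining two still hold a majority; it is a second failure that makes the ensemble unavailable.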
12-03-2018 09:49 PM
Also see Jordan's answer: https://community.hortonworks.com/questions/207947/why-kafka-should-be-en-even-number.html (in his last answer he says that it would not be a good idea to set only 3 ZooKeepers).
12-03-2018 09:40 PM
I have another question: in case we have 20 Kafka machines and only 3 ZooKeeper servers, is it still good to install the ZooKeepers on VM machines, or do we need physical machines?
12-03-2018 07:15 PM
Hi all, we have an Ambari cluster with the following details (HDP version 2.6.4):
128 datanode machines
3 Kafka machines
3 ZooKeeper servers
3 master machines
We want to add 17 Kafka machines to the cluster. What do we need to consider when adding the new 17 Kafka machines? Is it possible to stay with 3 ZooKeeper servers while adding the 17 Kafka machines?
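Config-wise, adding brokers does not by itself require growing the ZooKeeper ensemble; each new broker just points at the existing one. A minimal sketch of a new broker's `server.properties`, with hypothetical hostnames (in an Ambari-managed HDP cluster these settings are pushed by Ambari rather than edited by hand):

```
# server.properties for a newly added broker (hostnames are hypothetical)
broker.id=4                      # must be unique across all brokers
listeners=PLAINTEXT://kafka04.example.com:9092
log.dirs=/var/kafka-logs
# the existing 3-node ZooKeeper ensemble is reused unchanged
zookeeper.connect=zk01.example.com:2181,zk02.example.com:2181,zk03.example.com:2181
```

The main operational considerations are then partition reassignment/rebalancing onto the new brokers and the extra watch/session load the larger broker count puts on the existing ensemble.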
12-03-2018 05:16 PM
Hi all, on the datanode machines (we have 8 datanode machines in the Ambari cluster) we can see the following errors. We checked the DNS of the hostnames and the resolving of the IPs, and they are OK. Any suggestion on what else we need to check here?

```
2018-12-02 16:41:59,608 ERROR datanode.DataNode (DataXceiver.java:run(278)) - DATANODE01.sys54.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.23.12.179:39418 dst: /192.23.12.179:50010
2018-12-02 16:41:59,609 ERROR datanode.DataNode (DataXceiver.java:run(278)) - DATANODE01.sys54.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.23.12.179:39664 dst: /192.23.12.179:50010
2018-12-02 16:42:24,018 ERROR datanode.DataNode (DataXceiver.java:writeBlock(787)) - DataNode{data=FSDataset{dirpath='[/grid/sdb/hadoop/hdfs/data/current, /grid/sdc/hadoop/hdfs/data/current, /grid/sdd/hadoop/hdfs/data/current, /grid/sde/hadoop/hdfs/data/current, /grid/sdf/hadoop/hdfs/data/current]'}, localName='DATANODE01.sys54.com:50010', datanodeUuid='83024a74-8fa4-4cc4-ad09-82c5b065f8ad', xmitsInProgress=0}:Exception transfering block BP-1378391652-192.23.12.165-1531291408940:blk_1203178897_129440081 to mirror 192.23.12.181:50010: java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.23.12.179:35834 remote=/192.23.12.181:50010]
2018-12-02 16:42:24,018 ERROR datanode.DataNode (DataXceiver.java:run(278)) - DATANODE01.sys54.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.23.12.180:34342 dst: /192.23.12.179:50010
2018-12-02 16:42:30,637 ERROR datanode.DataNode (DataXceiver.java:run(278)) - DATANODE01.sys54.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.23.12.179:38120 dst: /192.23.12.179:50010
```
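The `SocketTimeoutException: 65000 millis` while transferring a block to the mirror usually points at a slow or saturated peer datanode, disk, or network path rather than DNS. One common mitigation while investigating the underlying I/O or network problem is to raise the DataNode socket timeouts in `hdfs-site.xml`; the values below are illustrative only, not a recommendation for this cluster:

```
<!-- hdfs-site.xml: illustrative values, tune for your network -->
<property>
  <name>dfs.client.socket-timeout</name>
  <value>120000</value> <!-- read timeout in ms; default is 60000 -->
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>960000</value> <!-- write timeout in ms; default is 480000 -->
</property>
```

It is also worth checking disk health and network throughput on the mirror node named in the log (192.23.12.181 here), since the timeout fires while waiting for that peer.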
11-18-2018 11:27 PM
Jay, another example: you can see that the BAR ended on 32, so can we create some API call that will give this value (32)?
11-18-2018 11:23 PM
@Jay, as you know, some parameters in Ambari have a scroll bar (with a max bar value); what we want is to get the MAX of the BAR by REST API.
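The slider bounds the Ambari UI shows are derived from the stack advisor's recommendations. A sketch of pulling a maximum out of such a response; note that the endpoint (`POST /api/v1/stacks/<stack>/versions/<version>/recommendations`) and the `property_attributes`/`maximum` response shape are assumptions based on how the UI derives its bounds, so verify both against your Ambari version:

```python
import json

# Hypothetical fragment of a stack-advisor recommendation response;
# the "property_attributes"/"maximum" keys are an assumption -- check
# the actual payload returned by your Ambari version.
sample = json.loads("""
{
  "configurations": {
    "hbase-env": {
      "properties": {"hbase_master_heapsize": "1024"},
      "property_attributes": {"hbase_master_heapsize": {"maximum": "15508"}}
    }
  }
}
""")

def slider_maximum(recs, config_type, prop):
    """Return the advertised maximum for a property, or None if absent."""
    conf = recs.get("configurations", {}).get(config_type, {})
    return conf.get("property_attributes", {}).get(prop, {}).get("maximum")

print(slider_maximum(sample, "hbase-env", "hbase_master_heapsize"))  # -> 15508
```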
11-18-2018 11:22 PM
@Jay, first, thank you. Actually, what we want is the value of the MAX BAR (as displayed, 15.508); is it possible to get this value by REST API?
11-18-2018 11:06 PM
@Jay, could you please help me with the following thread: https://community.hortonworks.com/questions/226622/rest-api-how-to-verify-the-max-value-for-each-para.html
11-18-2018 07:07 PM
Hi all, how can we get the number of CPUs on each data node by REST API? I also tried this but get the error below:

```
[root@master02 ~]# curl -u admin:admin -sS -G "https://master02:8080/api/v1/clusters/HDP/services/HDFS/components/DATANODE"
curl: (35) SSL received a record that exceeded the maximum permissible length.
```
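curl error 35 here usually means the endpoint speaks plain HTTP: by default Ambari serves HTTP on port 8080 (HTTPS only if SSL was explicitly enabled, typically on 8443), so `https://` against 8080 fails the TLS handshake. The CPU count is exposed per host rather than per service component; a sketch that parses a host resource, assuming the usual `Hosts/cpu_count` field (the hostname below is hypothetical):

```python
import json

# Hypothetical response fragment from
#   GET http://master02:8080/api/v1/clusters/HDP/hosts/<host>?fields=Hosts/cpu_count
# (note plain http -- https against port 8080 is what triggers curl error 35)
sample = json.loads("""
{
  "Hosts": {"host_name": "datanode01.example.com", "cpu_count": 16}
}
""")

def cpu_count(host_resource):
    """Extract the CPU count from an Ambari host resource."""
    return host_resource["Hosts"]["cpu_count"]

print(cpu_count(sample))  # -> 16
```

Iterating `GET /api/v1/clusters/HDP/hosts?fields=Hosts/cpu_count` should give the value for every node in one call.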
Labels:
- Apache Ambari
- Apache Hadoop