Member since: 12-12-2015
Posts: 27
Kudos Received: 7
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3445 | 12-12-2016 05:53 PM
 | 1826 | 10-21-2016 08:53 AM
02-11-2019
08:51 AM
Hello, To work around this issue, I used to stop and start the Flume process on a daily basis. We have recently migrated from Flume to NiFi, which is much more stable. Rgds Laurent
08-02-2017
02:39 PM
Hello @rbiswas, Sorry for the delay in getting back to you (I was on holiday). Thanks for your answers. Yes, we can close the thread. Regards Laurent
07-20-2017
08:37 AM
Hello @rbiswas, Sorry, I'm a bit confused by your last statement. Could you please confirm that if I define a replication factor of 4, and 4 racks, I will get the following distribution of replicas? (see diagram below) Regards Laurent
07-17-2017
08:19 AM
Thanks @rbiswas for your answer. My concern is the speed of the re-replication if, let's say, one rack is unavailable for 24 to 48 hours for maintenance reasons and, in the meantime, HDFS tries to re-replicate all the data onto the remaining rack, which might saturate the disk space on that rack! I can't find any documentation mentioning this "HDFS re-replication speed". Also, it looks to me that if the replication factor is equal to the number of racks, there is no guarantee that a replica will be placed in each rack. Can you confirm this? Thanks in advance. Rgds Laurent
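For anyone investigating the same question, the actual rack placement of each block can be inspected from the CLI. This is only a sketch (the path is a placeholder, and the commands must run against a live cluster with rack awareness configured):

```shell
# Show, for each file under the given path, which racks hold its replicas.
hdfs fsck /some/path -files -blocks -racks

# Report per-datanode capacity and usage; useful to spot a rack that is
# filling up while re-replication is in progress after a rack outage.
hdfs dfsadmin -report
```

The fsck output lists each block with the datanodes (and their rack paths) holding a replica, so you can verify whether replicas really ended up spread across racks.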
07-13-2017
09:06 AM
Hello, I am in the process of improving the resilience of our Hadoop clusters. We use a twin-datacenter architecture: the Hadoop cluster nodes are located in two different buildings 10 km apart, with NameNode HA enabled. We use a replication factor of 4 plus rack awareness with 2 racks (one rack per site). A replication factor of 4 is probably a bit of a luxury, but it should protect against the loss of an entire rack (loss of a site) plus the loss of some nodes on the remaining site. If we lose an entire rack, I am wondering whether HDFS will try to re-replicate the data on the remaining rack, so that we end up with 4 replicas on the same rack and overconsume space there, or whether it will "disable" the replica that was supposed to be located on the failed rack? Does it make sense to create 4 racks (one per replica) in order to ensure that the data is replicated across both sites in a balanced way (2x2)? Many thanks in advance for your feedback. Regards Laurent
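As a sketch of the 4-racks idea: HDFS learns rack membership from the topology script configured via net.topology.script.file.name, so "one rack per replica" just means mapping hosts to four rack paths, two per site. The subnets below are invented placeholders, not the real cluster layout:

```shell
#!/bin/sh
# Hypothetical topology script: the NameNode passes one or more host
# names/IPs as arguments and expects one rack path per line in return.
# The subnet-to-rack mapping here is an assumption for illustration.
rack_for() {
  case "$1" in
    10.1.1.*) echo "/site1/rack1" ;;
    10.1.2.*) echo "/site1/rack2" ;;
    10.2.1.*) echo "/site2/rack1" ;;
    10.2.2.*) echo "/site2/rack2" ;;
    *)        echo "/default-rack" ;;
  esac
}

for host in "$@"; do
  rack_for "$host"
done
```

With four racks defined this way and a replication factor of 4, the block placement policy has four distinct racks to choose from, which is the balanced 2x2 layout described above.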
Labels:
- Apache Hadoop
04-12-2017
07:59 AM
Hello, No solution to this issue, except re-installing the cluster! Before doing so, we renamed the HDFS directories in order to preserve the data. Rgds Laurent
12-12-2016
05:53 PM
Hello, We have finally re-installed the cluster (HDP 2.4 + Ambari 2.4.1), and the hosts are OK now. Rgds Laurent
10-26-2016
07:38 AM
1 Kudo
@Artem Ervits The problem is not only visible from my PC but from all Ambari users, so it's not related to the browser. When accessing the "Hosts" tab, the web console log outputs this error:

{
  "status" : 400,
  "message" : "The properties [Hosts/rack_i…nts/logging] specified in the request or predicate are not supported for the resource type Host."
}

...while trying to issue the following request:

http://<ambari_server>:8080/api/v1/clusters/OCP_prod/hosts?fields=Hosts/rack_i%E2%80%A6nts/logging&page_size=25&from=0&sortBy=Hosts/host_name.asc&_=1477464892288

Is it related to the "rack-aware" configuration of our cluster? Because if I issue the request:

http://<ambari_server>:8080/api/v1/clusters/OCP_prod/hosts?fields=Hosts

...it works fine. So the hosts are correctly seen in the database. Rgds Laurent
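For reference, the working variant of the request can be reproduced with curl. This is a sketch: <ambari_server> is a placeholder from the post, and admin:admin is an assumed credential:

```shell
# Query the Ambari REST API for the hosts of cluster OCP_prod.
# Replace <ambari_server> and the admin:admin credentials as appropriate.
curl -u admin:admin \
  'http://<ambari_server>:8080/api/v1/clusters/OCP_prod/hosts?fields=Hosts'
```

If this returns the host list while the UI's richer fields query fails with a 400, that points at the UI-generated field list rather than the hosts data itself.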
10-25-2016
10:36 AM
Venkat, The Ambari agents have also been upgraded. As I said earlier, the hosts table seems OK and the Ambari server is able to manage the hosts (for instance, adding new services); it's just this web page that doesn't list the hosts!
10-25-2016
09:12 AM
Hello @Artem Ervits The hosts are present in the hosts table, and we can definitely see them via a curl command. It looks like the issue lies in the generation of the web page. No, we didn't execute the reset command. Rgds Laurent