04-16-2017 09:30 PM
I have 2 hosts in my Hadoop cluster, but the host set up by the default Cloudera Manager installer got more roles, so memory usage on that host is high (3.6 GiB / 3.9 GiB). Can I move some roles from that host to the other host already in the cluster? If so, which roles can I move?
04-17-2017 08:23 AM
I have 6 servers in my environment, of which 2 were commissioned a few days ago.
One of the 6 servers has 25 roles assigned to it. I want to migrate some of these roles so that the cluster is in a balanced state. I am listing the roles assigned on each server below. Please help me balance the cluster, and also please give me the commands to run the balancer on the cluster.
Here are the roles currently running on the cluster:
Host 1 (newly commissioned):
Roles: HDFS DataNode
Disk Usage: 2.5 TB / 22.7 TB | Physical Memory: 10.6 GB / 252 GB | Swap: 0 GB / 32 GB
CDH Version: CDH 5 | OS: RedHat 6.6

Host 2 (newly commissioned):
Roles: HDFS DataNode
Disk Usage: 2.5 TiB / 22.7 TiB | Physical Memory: 11.9 GiB / 252.2 GiB | Swap: 0 B / 32 GiB
CDH Version: CDH 5 | OS: RedHat 6.6

Host 3:
Roles: HBase RegionServer, HDFS DataNode, Hive Gateway, Impala Daemon, Kafka Broker, Solr Server, Spark Gateway, YARN (MR2 Included) NodeManager
Disk Usage: 1.6 TiB / 3.2 TiB | Physical Memory: 119.1 GiB / 252.2 GiB | Swap: 0 B / 32 GiB
CDH Version: CDH 5 | OS: RedHat 6.6

Host 4 (the overloaded server):
Roles: HBase Master, HBase RegionServer, HBase Thrift Server, HDFS Balancer, HDFS NameNode, HDFS SecondaryNameNode, Hive Gateway, Hive Metastore Server, HiveServer2, Hue Server, Impala Catalog Server, Impala Daemon, Impala StateStore, Cloudera Management Service Alert Publisher, Cloudera Management Service Event Server, Cloudera Management Service Host Monitor, Cloudera Management Service Service Monitor, Oozie Server, Solr Server, Spark Gateway, Spark History Server, YARN (MR2 Included) JobHistory Server, YARN (MR2 Included) NodeManager, YARN (MR2 Included) ResourceManager
Disk Usage: 3.1 TiB / 4.4 TiB | Physical Memory: 254.5 GiB / 1.5 TiB | Swap: 0 B / 32 GiB
CDH Version: CDH 5 | OS: RedHat 6.6

Host 5:
Roles: HBase RegionServer, HDFS DataNode, Hive Gateway, Impala Daemon, Kafka Broker, Solr Server, Spark Gateway, YARN (MR2 Included) NodeManager, ZooKeeper Server
Disk Usage: 2.3 TiB / 4.4 TiB | Physical Memory: 198.3 GiB / 1.5 TiB | Swap: 0 B / 32 GiB
CDH Version: CDH 5 | OS: RedHat 6.6

Host 6:
Roles: HBase RegionServer, HDFS DataNode, Hive Gateway, Impala Daemon, Kafka Broker, Solr Server, Spark Gateway, YARN (MR2 Included) NodeManager, ZooKeeper Server
Disk Usage: 2.2 TiB / 4.4 TiB | Physical Memory: 162.4 GiB / 504.6 GiB | Swap: 0 B / 32 GiB
CDH Version: CDH 5 | OS: RedHat 6.6
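Regarding the balancer commands asked about above: the balancer can be started from Cloudera Manager (on the HDFS service, after a Balancer role is added, via the Rebalance action) or from the command line. A minimal sketch, where the threshold value (how far, in percentage points, a DataNode's utilization may deviate from the cluster average) is a tunable assumption, not a required value:

```shell
# Run the HDFS balancer as the hdfs superuser until DataNode
# utilization is within 10 percentage points of the cluster average.
sudo -u hdfs hdfs balancer -threshold 10
```

This needs a running cluster, so treat it as a command fragment; a lower threshold balances more aggressively but takes longer.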
04-17-2017 10:19 AM
I would recommend the following:
a- the 2 DataNodes:
1- HDFS DataNode
2- YARN (MR2 Included) NodeManager
3- Impala Daemon
4- Spark Gateway (to work around some bugs in different Cloudera Manager versions)
b- All Cloudera Manager services on a VM; 16-24 GB of memory will be enough
c- Hive roles, Oozie, Hue, and the database (MySQL) on another VM, as that allows you to create a new one in case of disaster.
d- 2 other physical servers, both have:
1- YARN (MR2 Included) ResourceManager
2- HDFS NameNode
3- ZooKeeper Server
one of them has the following additional roles:
4- YARN (MR2 Included) JobHistory Server
5- Spark History Server
6- HDFS Balancer
and the other has the following roles:
7- Impala Catalog Server
8- Impala StateStore
e- I would recommend a small node for the HDFS, Spark, and Hive Gateway roles; otherwise, they can be added to the DataNodes.
04-17-2017 11:34 AM
Thanks for your reply. How do I reassign the roles to the other servers? Can I do it using Cloudera Manager, or do I have to delete the service and add it again? Or can I stop the services using CM and move the roles to other servers through the UI?
Please let me know the steps on how to proceed.
Thanks and Regards,
04-17-2017 11:54 AM
Each role should be migrated/moved separately. For example:
NameNode, JournalNode, and Failover Controller:
Make sure to decommission the role on the nodes you want to stop it on, wait until the under-replicated blocks are closed, and then delete the role from those nodes.
Adding a DataNode role to another node can be done through Cloudera Manager by going to HDFS -> Instances -> Add Role Instances.
For the NodeManager, just stop and add it, the same as in HDFS.
Regarding Hive and Oozie, you need to stop the services, back up the database, and then create the database on the new node.
If it is not yet a production environment, I would recommend creating it from scratch, as that is safer, easier, and faster.
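The database step for Hive can be sketched as follows, assuming a MySQL metastore whose database is named `metastore` (both the engine and the database name are assumptions; adjust to your setup):

```shell
# On the old node: dump the Hive metastore database to a file.
mysqldump -u root -p metastore > metastore_backup.sql

# On the new node: recreate the database and load the dump.
mysql -u root -p -e "CREATE DATABASE metastore"
mysql -u root -p metastore < metastore_backup.sql
```

These are command fragments that require your database credentials; stop the Hive services before taking the dump so the metastore is consistent.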
04-17-2017 12:41 PM
Most of the points are covered by others; my 2 cents are:
1. Configure the NameNode and SecondaryNameNode on different servers. Make sure both servers have a similar configuration.
2. Configure Cloudera Manager, Hive, and Impala on different servers.
3. If possible, keep Cloudera Manager and the YARN ResourceManager on different nodes.
04-18-2017 06:07 AM
Sorry for my late response, and thanks for your reply.
I have 2 hosts and have already moved the ZooKeeper Server, Hue Server, Hive Gateway, and Hive Metastore Server successfully via the Instances pages in Cloudera Manager. All services started and show Good Health status except HDFS. There is an error from HDFS: Bad: 658 under-replicated blocks in the cluster. 658 total blocks in the cluster. Percentage under-replicated blocks: 100.00%. Critical threshold: 40.00%.
Result from: hdfs fsck / -files -blocks -locations
Total size: 530731534 B
Total dirs: 2293
Total files: 662
Total symlinks: 0
Total blocks (validated): 658 (avg. block size 806582 B)
Minimally replicated blocks: 658 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 658 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 2
Average block replication: 1.0
Corrupt blocks: 0
Missing replicas: 658 (50.0 %)
Number of data-nodes: 1
Number of racks: 1
FSCK ended at Tue Apr 18 07:16:43 UTC 2017 in 426 milliseconds
The filesystem under path '/' is HEALTHY
There are 658 missing replicas (50.0%).
My HDFS replication factor is set to 2 and I ran hdfs dfs -setrep 2 /, but it is still not solved.
Maybe you have a solution for this.
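The 50% figure in the fsck report follows directly from the replication arithmetic: with a replication factor of 2, the 658 blocks need 1,316 replicas, but a single live DataNode can hold only one replica per block, so 658 replicas (50%) are missing even though 100% of blocks are under-replicated. A quick sketch of that arithmetic:

```shell
# Replication arithmetic behind the fsck numbers above.
blocks=658
target_replication=2   # dfs.replication
live_datanodes=1       # from "Number of data-nodes: 1"

expected=$((blocks * target_replication))   # replicas HDFS wants: 1316
present=$((blocks * live_datanodes))        # replicas that can exist: 658
missing=$((expected - present))             # 658

echo "missing replicas: $missing / $expected ($((100 * missing / expected))%)"
```

No setrep invocation can fix this; the missing replicas can only be created once a second DataNode is live.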
04-18-2017 06:30 AM
As you can see from the report, you have 1 live DataNode, not 2:
Number of data-nodes: 1
Can you please make sure that you see 2 live DataNodes on the HDFS page?
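One way to confirm the live DataNode count from the command line is the dfsadmin report (a command fragment that must run against the live cluster):

```shell
# Ask the NameNode how many DataNodes it currently sees as live.
sudo -u hdfs hdfs dfsadmin -report | grep -i "live datanodes"
```

If the second DataNode is not listed, check that its DataNode role is started in Cloudera Manager and that it can reach the NameNode.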