
Move Role to another host

Explorer

I have 2 hosts in my Hadoop cluster, but the host set up by the default Cloudera Manager installer got most of the roles, so memory usage on that host is high (3.6 GiB / 3.9 GiB). Can I move some roles from that host to the other host that is already in the cluster? If I can, which roles can I move?

 

Thanks


Master Collaborator

Hi Adam,

 

Basically you can move any role you need to the other node.

 

Can you please list the roles in the default node?


Hi Fawze,

 

I have 6 servers in my environment, out of which 2 were commissioned a few days back.

 

Now, one of the 6 servers has 25 roles assigned to it. I want to migrate some of these roles so that the cluster is in a balanced state. I am listing the roles that are assigned to each server; please help me balance the cluster, and also please give me the commands to run the balancer on the cluster.

 

Here are the roles that are currently running on the cluster:

 

Server 1:
Roles: HDFS DataNode
Disk Usage: 2.5 TB / 22.7 TB
Physical Memory: 10.6 GB / 252 GB
Swap space: 0 GB / 32 GB
CDH Version: CDH 5, OS: RedHat 6.6

Server 2:
Roles: HDFS DataNode
Disk Usage: 2.5 TiB / 22.7 TiB
Physical Memory: 11.9 GiB / 252.2 GiB
Swap space: 0 B / 32 GiB
CDH Version: CDH 5, OS: RedHat 6.6

Server 3:
Roles: HBase RegionServer, HDFS DataNode, Hive Gateway, Impala Daemon, Kafka Broker, Solr Server, Spark Gateway, YARN (MR2 Included) NodeManager
Disk Usage: 1.6 TiB / 3.2 TiB
Physical Memory: 119.1 GiB / 252.2 GiB
Swap space: 0 B / 32 GiB
CDH Version: CDH 5, OS: RedHat 6.6

Server 4 (the one with 25 roles):
Roles: HBase Thrift Server, HBase Master, HBase RegionServer, HDFS Balancer, HDFS NameNode, HDFS SecondaryNameNode, Hive Gateway, Hive Metastore Server, HiveServer2, Hue Server, Impala Catalog Server, Impala Daemon, Impala StateStore, Cloudera Management Service Alert Publisher, Cloudera Management Service Event Server, Cloudera Management Service Host Monitor, Cloudera Management Service Service Monitor, Oozie Server, Solr Server, Spark Gateway, Spark History Server, YARN (MR2 Included) JobHistory Server, YARN (MR2 Included) NodeManager, YARN (MR2 Included) ResourceManager, ZooKeeper Server
Disk Usage: 3.1 TiB / 4.4 TiB
Physical Memory: 254.5 GiB / 1.5 TiB
Swap space: 0 B / 32 GiB
CDH Version: CDH 5, OS: RedHat 6.6

Server 5:
Roles: HBase RegionServer, HDFS DataNode, Hive Gateway, Impala Daemon, Kafka Broker, Solr Server, Spark Gateway, YARN (MR2 Included) NodeManager, ZooKeeper Server
Disk Usage: 2.3 TiB / 4.4 TiB
Physical Memory: 198.3 GiB / 1.5 TiB
Swap space: 0 B / 32 GiB
CDH Version: CDH 5, OS: RedHat 6.6

Server 6:
Roles: HBase RegionServer, HDFS DataNode, Hive Gateway, Impala Daemon, Kafka Broker, Solr Server, Spark Gateway, YARN (MR2 Included) NodeManager, ZooKeeper Server
Disk Usage: 2.2 TiB / 4.4 TiB
Physical Memory: 162.4 GiB / 504.6 GiB
Swap space: 0 B / 32 GiB
CDH Version: CDH 5, OS: RedHat 6.6

Champion
As @Fawze mentioned, any role can be moved. From the info provided, it looks like you are running 5 worker nodes and 1 master. Depending on the hardware, you can make one or two of those workers into masters; otherwise, moving master roles onto worker nodes will potentially cause issues.

Some tips for moving: if the role connects to a backend DB, like the Hive Metastore, then you need to update the DB user account to reflect it running from the new host. Some other roles are tricky, like the NameNode, as you would need to migrate the metadata.

The two candidates to convert from workers to masters would be the two that have only the DataNode role on them. You can do just one if you don't need production-level resiliency and redundancy in the master services.

First, remove the NodeManager, HBase RegionServer, Impala Daemon, and Solr Server roles from the current master (the one with 25 roles); those are all worker roles. Then I would move the ResourceManager, Spark History Server, Impala Catalog Server, Impala StateStore, and SecondaryNameNode. The last one is the only tricky one, and it should be fine, as a newly set up SecondaryNameNode will sync from the NameNode.

I also recommend moving the Cloudera Manager roles off to a completely different server, possibly a VM. They don't need to be on the cluster and are using up valuable resources there. You will need to update all DB accounts related to them, as sketched below.
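A minimal sketch for MySQL 5.x, assuming a management database named amon and a user amon (both names are examples; repeat for each Cloudera Management Service database and substitute your own host and password):

# run on the MySQL host backing the management services (MySQL 5.x GRANT syntax)
mysql -u root -p -e "GRANT ALL PRIVILEGES ON amon.* TO 'amon'@'new-cm-host.example.com' IDENTIFIED BY 'amon_password'; FLUSH PRIVILEGES;"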

That is a start, we could go further but that should be enough to give you some breathing room.

Good luck.

Master Collaborator

I would recommend the following:

a- the 2 DataNodes:

1- HDFS DataNode
2- YARN (MR2 Included) NodeManager
3- Impala Daemon
4- Spark Gateway (to work around some bugs in different Cloudera Manager versions)

b- All Cloudera Manager services to be on a VM; 16-24 GB of memory will be enough.

c- Hive roles, Oozie, Hue, and the database (MySQL) to be on another VM, as this allows you to create a new one in case of disaster.

d- 2 other physical servers, both of which have:

1- YARN (MR2 Included) ResourceManager
2- HDFS NameNode
3- ZooKeeper Server

one of them has the following additional roles:

4- YARN (MR2 Included) JobHistory Server
5- Spark History Server
6- HDFS Balancer

and the other has the following additional roles:

7-Impala Catalog Server
8- Impala StateStore

e- I would recommend a small node for the HDFS, Spark, and Hive gateways; if not, they can be added to the DataNodes. The balancer commands you asked about are below.
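The rebalance can be triggered from Cloudera Manager (the Balancer role under the HDFS service) or from the shell. A minimal sketch, where the 10% threshold is an assumption you should tune for your cluster:

# run as the hdfs user on a host with the HDFS client configured; moves blocks
# until each DataNode is within 10% of the cluster-wide average utilization
sudo -u hdfs hdfs balancer -threshold 10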


Thanks for your reply. How do I reassign the roles to the other servers? Can I do it using Cloudera Manager, or do I have to delete the service and add it again? Or can I stop the services using CM and move the roles to the other servers through the UI?

 

Kindly let me know the steps on how to proceed.

 

Thanks and Regards,

Sanjeev kishore.

Master Collaborator

Hi Sanjeev,

 

Each role should be moved separately, for example:

 

NameNode, JournalNode, and Failover Controller:

https://www.cloudera.com/documentation/enterprise/5-6-x/topics/admin_nn_migrate_roles.html

Cloudera Manager:

https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cm_ag_restore_server.html

ZooKeeper:

https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cm_mc_zookeeper_service.html#Replacin...

 

For the DataNode:

Make sure to decommission the role on the nodes where you want to stop the service, wait until the under-replicated blocks are closed, and then delete the role from those nodes.

Adding the DataNode role to another node can be done in Cloudera Manager by going to HDFS ---> Instances and adding role instances. A sketch for monitoring the decommission from the shell follows.
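A minimal way to watch the decommission and re-replication from the command line, assuming shell access to a host with the HDFS client configured:

# per-DataNode status, including nodes in "Decommission in progress"
sudo -u hdfs hdfs dfsadmin -report

# blocks still waiting to be re-replicated
sudo -u hdfs hdfs fsck / | grep -i 'under-replicated'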

 

 

For the NodeManager, just stop and add the role, the same as in HDFS.

 

Regarding Hive and Oozie, you need to stop the services, back up the database, and then create the database on the new node; a sketch follows.
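For example, if the Hive Metastore database is in MySQL, the backup and restore could look like this (the database name metastore and the root credentials are assumptions; substitute your own):

# on the old database host: dump the metastore database
mysqldump -u root -p metastore > metastore_backup.sql

# on the new database host: recreate the database and load the dump
mysql -u root -p -e "CREATE DATABASE metastore;"
mysql -u root -p metastore < metastore_backup.sql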

 

If it is not yet a production environment, I would recommend creating it from scratch, as that is safer, easier, and faster.

 

Champion

@SanjeevkishoreY

 

Most of the points are covered by others; my 2 cents:

 

1. Configure the NameNode and SecondaryNameNode on different servers. Make sure both servers have a similar configuration.
2. Configure Cloudera Manager, Hive, and Impala on different servers.
3. If possible, keep Cloudera Manager and the YARN ResourceManager on different nodes.

Explorer

Hi Fawze,

 

Sorry for my late response, and thanks for your reply.

 

I have 2 hosts, and I already moved the ZooKeeper Server, Hue Server, Hive Gateway, and Hive Metastore Server successfully via the Instances page in Cloudera Manager. All services started and show Good Health except HDFS. There is an error from HDFS: Bad: 658 under-replicated blocks in the cluster. 658 total blocks in the cluster. Percentage under-replicated blocks: 100.00%. Critical threshold: 40.00%.

 

Result from: $ hdfs fsck / -files -blocks -locations

 

Status: HEALTHY
 Total size:    530731534 B
 Total dirs:    2293
 Total files:    662
 Total symlinks:        0
 Total blocks (validated):    658 (avg. block size 806582 B)
 Minimally replicated blocks:    658 (100.0 %)
 Over-replicated blocks:    0 (0.0 %)
 Under-replicated blocks:    658 (100.0 %)
 Mis-replicated blocks:        0 (0.0 %)
 Default replication factor:    2
 Average block replication:    1.0
 Corrupt blocks:        0
 Missing replicas:        658 (50.0 %)
 Number of data-nodes:        1
 Number of racks:        1
FSCK ended at Tue Apr 18 07:16:43 UTC 2017 in 426 milliseconds

The filesystem under path '/' is HEALTHY

 

There are 658 missing replicas (50.0 %).

 

My HDFS replication factor is set to 2, and I ran hdfs dfs -setrep 2 /, but the problem is still not solved.

 

Maybe you have a solution for this.

 

Thanks

Master Collaborator (Accepted Solution)

Hi @adam990,

 

As you can see from the report, you have 1 live DataNode, not 2.

 

Number of data-nodes:        1

 

Can you please make sure that you see 2 live DataNodes in HDFS? One way to check from the shell is sketched below.
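For reference, a quick way to confirm the number of live DataNodes from the command line, assuming shell access to a cluster host (the exact report wording varies by Hadoop version):

# the report header shows a line like "Live datanodes (N):"; it should be 2 here
sudo -u hdfs hdfs dfsadmin -report | grep -i 'live datanodes'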