
1) Download the Ambari Integration module and HDFS transparency connector. You can get it here.

2) Stop all the services from Ambari.

Ambari -> Actions -> Stop All

3) After all the services have stopped, go to the Spectrum Scale service and unintegrate transparency.

SpectrumScale -> Service Actions -> Unintegrate Transparency

This step replaces the HDFS transparency modules with native HDFS and adds back the Secondary NameNode.

4) Delete the Spectrum Scale service. Type "delete" when prompted to confirm the deletion.

SpectrumScale -> Service Actions -> Delete Service
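Deleting the service can also be done through the Ambari REST API once the service is stopped. This is a hedged sketch: the host, cluster name, and credentials are placeholders, and the service name is an assumption (check the exact name with a GET on /api/v1/clusters/&lt;cluster&gt;/services). The leading echo makes it a dry run; remove it to actually send the request.

```shell
AMBARI_HOST="ambari-server.example.com"   # placeholder host
CLUSTER="mycluster"                       # placeholder cluster name
SERVICE="SPECTRUMSCALE"                   # assumed service name -- verify first

# Build the request URL; the echo makes this a dry run -- remove it to send.
URL="http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER}/services/${SERVICE}"
echo curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE "$URL"
```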

5) Extract the tar file downloaded in Step 1 on your Ambari Server node and run the mPack uninstaller.

./SpectrumScaleMPackUninstaller.py

The uninstaller prompts for a few values, such as the Ambari server IP and the Ambari username; enter them when prompted.

6) The above steps remove the service from Ambari. Restart the Ambari server and start all the services.

Ambari -> Actions -> Start All
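The same "Start All" action can also be issued through the Ambari REST API: setting every service's ServiceInfo state to STARTED starts them all. This is a hedged sketch with placeholder host, cluster name, and credentials; the leading echo makes it a dry run, so remove it to actually send the request.

```shell
AMBARI_HOST="ambari-server.example.com"   # placeholder host
CLUSTER="mycluster"                       # placeholder cluster name

# Build the request URL; the echo makes this a dry run -- remove it to send.
URL="http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER}/services"
echo curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start All Services"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  "$URL"
```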

If you were using an existing ESS cluster, the HDP nodes will still be part of that ESS cluster. Remove them from the ESS cluster as well by performing the steps below.

1) On any ESS cluster node, run mmlscluster:

[root@xxx-ems1 ~]# mmlscluster
GPFS cluster information
========================
  GPFS cluster name:         xxx.yyy.net
  GPFS cluster id:           123123123123132123123
  GPFS UID domain:           xxx.yyy.net
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR
 Node  Daemon node name                IP address     Admin node name                  Designation
------------------------------------------------------------------------------------------------------------------------------------------
   1   xxx-yyy1-hs.gpfs.net            10.10.99.11    xxx-yyy1-hs.gpfs.net             quorum-manager-perfmon
   2   xxx-yyy2-hs.gpfs.net            10.10.99.12    xxx-yyy2-hs.gpfs.net             quorum-manager-perfmon
   3   xxx-ems1-hs.gpfs.net            10.10.99.13    xxx-ems1-hs.gpfs.net             quorum-perfmon
  62   hdp-gpds-node-6.openstacklocal  10.101.76.62   hdp-gpds-node-6.openstacklocal
  63   hdp-gpds-node-5.openstacklocal  10.101.76.40   hdp-gpds-node-5.openstacklocal
  64   hdp-gpds-node-4.openstacklocal  10.101.76.60   hdp-gpds-node-4.openstacklocal

This is sample output from the mmlscluster command. In the output above, nodes 1, 2, and 3 are the ESS nodes, and nodes 62, 63, and 64 are the HDP cluster nodes that were enrolled into the ESS cluster. We will go ahead and delete the HDP nodes.

2) Delete each node by running the command below:

mmdelnode -N hdp-gpds-node-6.openstacklocal

Run this command for every HDP node. After deleting all the nodes, confirm that they were removed by running "mmlscluster" again.
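The per-node deletions can be sketched as a loop over the HDP node names from the mmlscluster output above (adjust the list to your cluster). The leading echo makes this a dry run that only prints each mmdelnode command; remove it to actually delete the nodes, then re-run mmlscluster to verify.

```shell
# Dry run: print the mmdelnode command for each HDP node.
# Node names are the examples from the mmlscluster output above.
for node in hdp-gpds-node-4.openstacklocal \
            hdp-gpds-node-5.openstacklocal \
            hdp-gpds-node-6.openstacklocal; do
    echo mmdelnode -N "$node"
done
```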

Hope this helps :)

Last updated: 10-13-2017 10:38 AM