
HDP 2.5 HBase HA Ambari blueprint

New Contributor

I am trying to create a blueprint that installs HBase with two HBase Masters (Active-Standby) on a cluster that also has two NameNodes (Active-Standby). The issue I am experiencing is that the blueprint installs only a standby HBase Master and no active HBase Master, and the HBase RegionServers are not starting either. Kindly check the blueprint below and let me know where I am making a mistake.

{ "Blueprints": { "blueprint_name": "blueprint992c1c9a-b38d-4a8f-bb6f-b4e45f7f447d", "stack_version": "2.5", "stack_name": "HDP" }, "configurations": [ { "core-site": { "properties": { "ha.zookeeper.quorum": "%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::slave_1%:2181", "fs.defaultFS": "hdfs:\/\/hdpcluster" } } }, { "hdfs-site": { "properties": { "dfs.namenode.https-address": "%HOSTGROUP::master_1%:50470", "dfs.nameservices": "hdpcluster", "dfs.namenode.http-address": "%HOSTGROUP::master_1%:50070", "dfs.client.failover.proxy.provider.hdpcluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider", "dfs.namenode.rpc-address.hdpcluster.nn1": "%HOSTGROUP::master_1%:8020", "dfs.namenode.rpc-address.hdpcluster.nn2": "%HOSTGROUP::master_2%:8020", "dfs.ha.fencing.methods": "shell(\/bin\/true)", "dfs.namenode.https-address.hdpcluster.nn1": "%HOSTGROUP::master_1%:50470", "dfs.ha.namenodes.hdpcluster": "nn1,nn2", "dfs.namenode.https-address.hdpcluster.nn2": "%HOSTGROUP::master_2%:50470", "dfs.ha.automatic-failover.enabled": "true", "dfs.namenode.shared.edits.dir": "qjournal:\/\/%HOSTGROUP::master_1%:8485;%HOSTGROUP::slave_1%:8485;%HOSTGROUP::master_2%:8485\/hdpcluster", "dfs.namenode.http-address.hdpcluster.nn1": "%HOSTGROUP::master_1%:50070", "dfs.namenode.http-address.hdpcluster.nn2": "%HOSTGROUP::master_2%:50070" } } } ], "host_groups": [ { "name": "master_1", "components": [ { "name": "HISTORYSERVER" }, { "name": "JOURNALNODE" }, { "name": "NAMENODE" }, { "name": "ZKFC" }, { "name": "HDFS_CLIENT" }, { "name": "MAPREDUCE2_CLIENT" }, { "name": "METRICS_MONITOR" }, { "name": "TEZ_CLIENT" }, { "name": "YARN_CLIENT" }, { "name": "ZOOKEEPER_CLIENT" }, { "name": "ZOOKEEPER_SERVER" } ], "cardinality": "1" }, { "name": "master_2", "components": [ { "name": "APP_TIMELINE_SERVER" }, { "name": "METRICS_COLLECTOR" }, { "name": "METRICS_GRAFANA" }, { "name": "METRICS_MONITOR" }, { "name": "RESOURCEMANAGER" }, { "name": "JOURNALNODE" }, { "name": "NAMENODE" }, { "name": "ZKFC" }, { "name": "HDFS_CLIENT" }, { "name": "MAPREDUCE2_CLIENT" }, { "name": "TEZ_CLIENT" }, { "name": "YARN_CLIENT" }, { "name": "ZOOKEEPER_CLIENT" }, { "name": "ZOOKEEPER_SERVER" } ], "cardinality": "1" }, { "name": "slave_1", "components": [ { "name": "DATANODE" }, { "name": "NODEMANAGER" }, { "name": "JOURNALNODE" }, { "name": "HBASE_MASTER" }, { "name": "HBASE_REGIONSERVER" }, { "name": "METRICS_MONITOR" }, { "name": "HDFS_CLIENT" }, { "name": "MAPREDUCE2_CLIENT" }, { "name": "TEZ_CLIENT" }, { "name": "YARN_CLIENT" }, { "name": "ZOOKEEPER_CLIENT" }, { "name": "ZOOKEEPER_SERVER" }, { "name": "HBASE_CLIENT" } ], "cardinality": "1" }, { "name": "slave_2", "components": [ { "name": "DATANODE" }, { "name": "NODEMANAGER" }, { "name": "METRICS_MONITOR" }, { "name": "HBASE_MASTER" }, { "name": "HBASE_REGIONSERVER" }, { "name": "HBASE_CLIENT" }, { "name": "HDFS_CLIENT" }, { "name": "MAPREDUCE2_CLIENT" }, { "name": "TEZ_CLIENT" }, { "name": "YARN_CLIENT" }, { "name": "ZOOKEEPER_CLIENT" } ], "cardinality": "1" } ] }

1 ACCEPTED SOLUTION

Expert Contributor

Hi @Navdeep Singh, the problem you're experiencing is a configuration issue.

I tried out the Blueprint you've posted here on a local Vagrant cluster, and there were no exceptions in the ambari-server log, which indicates that the Blueprint deployment itself completed without any unexpected errors.

I then looked through the HBase logs and noticed some exceptions when HBase attempted to connect to HDFS.

Generally, HBase must be configured to point to the HDFS NameNode (or HA nameservice) being used. In your particular case, the configuration was pointing to a "localhost" address, which is incorrect, since the NameNodes in your cluster are not deployed on the same hosts as either of the HBase Master components.

If you add the following configuration block to your Blueprint, the deployment will work fine:

{
  "hbase-site" : {
    "properties" : {
      "hbase.rootdir" : "hdfs://hdpcluster/apps/hbase/data"
    }
  }
}

Since your HDFS configuration defines a nameservice named "hdpcluster", that nameservice must be used in hbase.rootdir, the HBase setting that points to the root directory in HDFS used by HBase.
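
As a quick sanity check (a suggestion on my part, assuming the HDFS client configs have already been deployed to the HBase Master hosts), you can confirm from one of those hosts that the nameservice URI actually resolves:

  # Both NameNodes should report a state, one "active" and one "standby"
  hdfs haadmin -getServiceState nn1
  hdfs haadmin -getServiceState nn2

  # The HBase root directory should be reachable through the nameservice URI
  # (this path will only exist once HBase has started and created it)
  hdfs dfs -ls hdfs://hdpcluster/apps/hbase/data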

I modified a local copy of your Blueprint with this change added; the cluster deployed properly, and HBase started up as expected (Active and Standby Masters started, RegionServers started).
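
If you want to double-check the Master roles yourself after the deployment (not required, just a convenient check), the "status" command in the HBase shell reports the active and backup Masters and the number of RegionServers:

  # Run from any host with the HBase client installed
  echo "status" | hbase shell
  # The output should include a line similar to:
  #   1 active master, 1 backup masters, 2 servers, 0 dead, ...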

For Blueprint deployments, the HDFS HA settings are not automatically set by the Blueprints processor for services/components that depend upon HDFS. This should probably be updated in a future version of Ambari.

I've included a modified copy of your Blueprint below that seems to work fine now with the new configuration.

Hope this helps.

{
  "configurations" : [
    {
      "hdfs-site" : {
        "properties" : {
          "dfs.namenode.https-address" : "%HOSTGROUP::master_1%:50470",
          "dfs.client.failover.proxy.provider.hdpcluster" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
          "dfs.namenode.rpc-address.hdpcluster.nn2" : "%HOSTGROUP::master_2%:8020",
          "dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::master_1%:8485;%HOSTGROUP::slave_1%:8485;%HOSTGROUP::master_2%:8485/hdpcluster",
          "dfs.namenode.http-address.hdpcluster.nn2" : "%HOSTGROUP::master_2%:50070",
          "dfs.namenode.http-address.hdpcluster.nn1" : "%HOSTGROUP::master_1%:50070",
          "dfs.ha.fencing.methods" : "shell(/bin/true)",
          "dfs.nameservices" : "hdpcluster",
          "dfs.namenode.http-address" : "%HOSTGROUP::master_1%:50070",
          "dfs.ha.namenodes.hdpcluster" : "nn1,nn2",
          "dfs.namenode.https-address.hdpcluster.nn2" : "%HOSTGROUP::master_2%:50470",
          "dfs.namenode.rpc-address.hdpcluster.nn1" : "%HOSTGROUP::master_1%:8020",
          "dfs.namenode.https-address.hdpcluster.nn1" : "%HOSTGROUP::master_1%:50470",
          "dfs.ha.automatic-failover.enabled" : "true"
        }
      }
    },
    {
      "core-site" : {
        "properties" : {
          "fs.defaultFS" : "hdfs://hdpcluster",
          "ha.zookeeper.quorum" : "%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::slave_1%:2181"
        }
      }
    },
    {
      "hbase-site" : {
        "properties" : {
          "hbase.rootdir" : "hdfs://hdpcluster/apps/hbase/data"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "components" : [
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "MAPREDUCE2_CLIENT"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "HISTORYSERVER"
        },
        {
          "name" : "TEZ_CLIENT"
        },
        {
          "name" : "ZKFC"
        },
        {
          "name" : "JOURNALNODE"
        },
        {
          "name" : "NAMENODE"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        },
        {
          "name" : "METRICS_MONITOR"
        }
      ],
      "configurations" : [ ],
      "name" : "master_1",
      "cardinality" : "1"
    },
    {
      "components" : [
        {
          "name" : "NAMENODE"
        },
        {
          "name" : "TEZ_CLIENT"
        },
        {
          "name" : "RESOURCEMANAGER"
        },
        {
          "name" : "MAPREDUCE2_CLIENT"
        },
        {
          "name" : "ZKFC"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "METRICS_GRAFANA"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "METRICS_MONITOR"
        },
        {
          "name" : "APP_TIMELINE_SERVER"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "JOURNALNODE"
        },
        {
          "name" : "METRICS_COLLECTOR"
        }
      ],
      "configurations" : [ ],
      "name" : "master_2",
      "cardinality" : "1"
    },
    {
      "components" : [
        {
          "name" : "ZOOKEEPER_CLIENT"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "HBASE_MASTER"
        },
        {
          "name" : "MAPREDUCE2_CLIENT"
        },
        {
          "name" : "HBASE_REGIONSERVER"
        },
        {
          "name" : "METRICS_MONITOR"
        },
        {
          "name" : "HBASE_CLIENT"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "TEZ_CLIENT"
        },
        {
          "name" : "NODEMANAGER"
        }
      ],
      "configurations" : [ ],
      "name" : "slave_2",
      "cardinality" : "1"
    },
    {
      "components" : [
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "HBASE_CLIENT"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "HBASE_MASTER"
        },
        {
          "name" : "TEZ_CLIENT"
        },
        {
          "name" : "NODEMANAGER"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "METRICS_MONITOR"
        },
        {
          "name" : "JOURNALNODE"
        },
        {
          "name" : "HBASE_REGIONSERVER"
        },
        {
          "name" : "MAPREDUCE2_CLIENT"
        }
      ],
      "configurations" : [ ],
      "name" : "slave_1",
      "cardinality" : "1"
    }
  ],
  "settings" : [ ],
  "Blueprints" : {
    "blueprint_name" : "blueprint992c1c9a-b38d-4a8f-bb6f-b4e45f7f447d",
    "stack_name" : "HDP",
    "stack_version" : "2.5",
    "security" : {
      "type" : "NONE"
    }
  }
}
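
For reference, registering the Blueprint and creating the cluster through the Ambari REST API looks roughly like this; the Ambari host, credentials, cluster name, and host FQDNs below are placeholders, so substitute your own values:

  # Register the blueprint (blueprint.json is the document above)
  curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
    -d @blueprint.json \
    http://AMBARI_HOST:8080/api/v1/blueprints/blueprint992c1c9a-b38d-4a8f-bb6f-b4e45f7f447d

  # cluster.json: a cluster creation template mapping each host group to a real host
  # (the FQDNs here are examples only)
  # {
  #   "blueprint" : "blueprint992c1c9a-b38d-4a8f-bb6f-b4e45f7f447d",
  #   "default_password" : "changeme",
  #   "host_groups" : [
  #     { "name" : "master_1", "hosts" : [ { "fqdn" : "master1.example.com" } ] },
  #     { "name" : "master_2", "hosts" : [ { "fqdn" : "master2.example.com" } ] },
  #     { "name" : "slave_1",  "hosts" : [ { "fqdn" : "slave1.example.com" } ] },
  #     { "name" : "slave_2",  "hosts" : [ { "fqdn" : "slave2.example.com" } ] }
  #   ]
  # }

  # Create the cluster from the blueprint
  curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
    -d @cluster.json \
    http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME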


3 REPLIES

Expert Contributor

Hi @Navdeep Singh, did this resolve your Blueprint deployment issue?

If so, could you please accept this answer so that it can be useful for others as well?

Thanks

New Contributor

Hi @rnettleton, thanks a lot. Yes, it worked for me.