Support Questions

Find answers, ask questions, and share your expertise

Ambari blueprints and Namenode metadata

Expert Contributor

A few questions on Ambari Blueprints and the REST API

Can someone please answer these questions?

1) After installing an HA cluster via blueprint, do we need to initialize the metadata on secondary namenode manually?

2) The same question in case we reboot the secondary namenode.

3) Can we monitor the status of Ambari agents through REST API?

1 ACCEPTED SOLUTION


If you install HDFS HA via a blueprint, you don't have to initialize anything manually afterwards; it is all done during the blueprint rollout.

You can monitor the status of Ambari Agents or hosts via

http://<ambari server>:8080/api/v1/hosts/<hostname>

This returns detailed information about the host, e.g. disk info, running services (including process IDs), the last heartbeat of the Ambari agent, its health status, etc.

{
  "href" : "http://example.com:8080/api/v1/hosts/horton01.example.com",
  "Hosts" : {
    "cluster_name" : "bigdata",
    "cpu_count" : 2,
    "desired_configs" : null,
    "disk_info" : [
      {
        "available" : "5922140",
        "device" : "/dev/vda1",
        "used" : "13670764",
        "percent" : "70%",
        "size" : "20641404",
        "type" : "ext3",
        "mountpoint" : "/"
      },
      ...
    ],
    "host_health_report" : "",
    "host_name" : "horton01.cloud.hortonworks.com",
    "host_state" : "HEALTHY",
    "host_status" : "HEALTHY",
    "ip" : "172.24.68.17",
    ...
        "agentTimeStampAtReporting" : 1453826633797,
        "serverTimeStampAtReporting" : 1453826633829,
        "liveServices" : [
          ...
        ]
      },
      "umask" : 18,
      ...
    },
    "last_heartbeat_time" : 1453826643874,
    "last_registration_time" : 1452849291890,
    "os_arch" : "x86_64",
    "os_family" : "redhat6",
    "os_type" : "centos6",
    "ph_cpu_count" : 2,
    "public_host_name" : "horton01.example.com",
    "rack_info" : "/14",
    "recovery_report" : {
      "summary" : "DISABLED",
      "component_reports" : [ ]
    },
    "recovery_summary" : "DISABLED",
    "total_mem" : 7543576
  },
  "alerts_summary" : {
    "CRITICAL" : 0,
    "MAINTENANCE" : 0,
    "OK" : 18,
    "UNKNOWN" : 0,
    "WARNING" : 1
  }
}

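A minimal sketch of consuming that endpoint programmatically. The health check below is an assumption based only on the `host_state`/`host_status` fields in the sample response; in practice you would first fetch the JSON with e.g. `curl -u admin:admin http://<ambari server>:8080/api/v1/hosts/<hostname>` (credentials are placeholders).

```python
import json

def host_is_healthy(host_json):
    """Return True when Ambari reports both the host and its agent as HEALTHY."""
    hosts = host_json.get("Hosts", {})
    return hosts.get("host_state") == "HEALTHY" and hosts.get("host_status") == "HEALTHY"

# Trimmed copy of the sample response shown above.
sample = json.loads("""
{
  "Hosts" : {
    "host_name" : "horton01.cloud.hortonworks.com",
    "host_state" : "HEALTHY",
    "host_status" : "HEALTHY",
    "last_heartbeat_time" : 1453826643874
  }
}
""")

print(host_is_healthy(sample))  # True
```

The same check can be run across all hosts by querying `/api/v1/hosts` and iterating over the returned items.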

8 Replies


Expert Contributor

@Jonas Straub Thanks for your quick reply, but could you also answer question #2: do we need to reinitialize the metadata if we reboot the secondary namenode?


Just to avoid confusion: in an HA setup there is no Secondary NameNode, only NameNode-1 and NameNode-2. One of the two is always the active NameNode; the other is a standby.

During the blueprint rollout Ambari executes several steps (see here) to initialize the JournalNodes, format the ZooKeeper znode, and distribute the metadata (fsimage, etc.). These steps are executed only once. If you restart the active NameNode, it first transitions to standby and makes the other NameNode the active one; once that is done it restarts, so the metadata is not reinitialized.
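To verify which NameNode is active after such a restart, each NameNode exposes its HA state via JMX at `http://<nn-host>:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus` (port 9870 on Hadoop 3.x). A small sketch of parsing that response; the sample values below are illustrative, not from a real cluster:

```python
import json

def nn_ha_state(jmx_json):
    """Extract the HA state ('active' or 'standby') from a NameNode
    NameNodeStatus JMX bean response."""
    return jmx_json["beans"][0]["State"]

# Trimmed, illustrative sample of a NameNodeStatus bean response.
sample = json.loads("""
{
  "beans" : [ {
    "name" : "Hadoop:service=NameNode,name=NameNodeStatus",
    "State" : "standby",
    "HostAndPort" : "horton01.example.com:8020"
  } ]
}
""")

print(nn_ha_state(sample))  # standby
```

Alternatively, `hdfs haadmin -getServiceState <nn-serviceid>` reports the same state from the command line.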

Master Mentor

@rbalam has this been resolved? Can you provide a solution or accept the best answer?

Rising Star

The "see here" link is unavailable now. Could you please post an updated link? @Jonas Straub

I am installing HDP 3.0 HA using an Ambari 2.7 blueprint. The NameNodes failed to start with the error "NameNode is not formatted." This did not happen with HDP 2.6 using almost the same blueprint. It looks like the Ambari rollout failed to distribute the metadata. Is there a way to get the root cause? The log did not help much. Thanks.

Master Mentor

@Lian Jiang

Can you share your blueprint after stripping it of site-specific info?

Rising Star

Geoffrey,

blueprint for HDP3.0

Thanks.

Rising Star

After reading the Ambari source code, I solved the problem. I needed to specify the following in the blueprint:

"cluster-env" : {
  "properties" : {
    "dfs_ha_initial_namenode_active" : "%HOSTGROUP::master_host_group%",
    "dfs_ha_initial_namenode_standby" : "%HOSTGROUP::master2_host_group%"
  }
}
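For context, a sketch of where that block sits: it belongs in the blueprint's top-level `configurations` array. The host group names come from the snippet above and must match the `host_groups` defined in your own blueprint; the stack values here are only an example.

```json
{
  "Blueprints" : { "stack_name" : "HDP", "stack_version" : "3.0" },
  "configurations" : [
    {
      "cluster-env" : {
        "properties" : {
          "dfs_ha_initial_namenode_active" : "%HOSTGROUP::master_host_group%",
          "dfs_ha_initial_namenode_standby" : "%HOSTGROUP::master2_host_group%"
        }
      }
    }
  ],
  "host_groups" : [ ... ]
}
```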