I have uploaded a file containing the Ambari output for the failure of the HBase service check during an Ambari 2.2.1 install of HDP 22.214.171.124. This happened during the initial install of a cluster on OpenStack instances. What is wrong, and what do I need to do to correct it on the next install?
Also, is there a way to rerun the HBase service check on an existing cluster to see if the problem is corrected?
Here is the list of Services I have installed:
HDFS MapReduce2 YARN Tez Hive HBase Pig Sqoop Oozie ZooKeeper Storm Flume Ambari Metrics Kafka Mahout Spark
Hi @Michael Brown, the log is saying that the HBase client couldn't access HBase-related data stored in your ZooKeeper. So, check whether your ZooKeeper is up and running, and run its service check to confirm. You can rerun the HBase service check by selecting HBase on your Ambari dashboard, clicking "Service Actions" at the top right, and then choosing "Run Service Check".
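If you prefer the command line, here is a hedged sketch of the same two checks. The hostnames, cluster name, and credentials (`zk-host`, `ambari-host`, `MyCluster`, `admin:admin`) are placeholders for your environment; the ZooKeeper four-letter-word probe and the Ambari REST request format are standard, but verify the port numbers against your own configuration.

```shell
# 1. Quick ZooKeeper liveness probe: a healthy server answers "imok"
#    to the "ruok" four-letter-word command on its client port (2181 by default).
echo ruok | nc zk-host 2181

# 2. Trigger the HBase service check through the Ambari REST API
#    (equivalent to "Run Service Check" in the UI). Replace host,
#    credentials, and cluster name with your own values.
curl -u admin:admin -H 'X-Requested-By: ambari' \
  -X POST \
  -d '{"RequestInfo":{"context":"HBase Service Check","command":"HBASE_SERVICE_CHECK"},
       "Requests/resource_filters":[{"service_name":"HBASE"}]}' \
  http://ambari-host:8080/api/v1/clusters/MyCluster/requests
```

The REST call returns a request ID; you can watch its progress under the same `/requests` endpoint or in the Ambari operations dialog.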
Hi @Predrag Minovic, your answer helps a lot. This happened during an Ambari initial install and start of the full list of services given above. I think it means that Ambari started the HBase service check before the services it depends on were up and running. As soon as I had access to the Ambari GUI, I started all the services that were not started, and the ZooKeeper and HBase service checks now run successfully.
Glad to hear that all your services are up and running and the service checks passed. Regarding your initial problem, Ambari actually schedules service checks only after the required services have started, and in a particular order, so, for example, the ZooKeeper check was supposed to happen before the HBase check. In other words, the issue you experienced was not supposed to happen. But perhaps a download took longer than expected, the ZooKeeper startup was delayed, and so on; many things happen during an initial install. If you can afford a short cluster downtime, you can try "Stop All" and then "Start All" (from "Actions" at the bottom of the left pane of the Ambari Dashboard) to follow the sequence of starting services and running service checks. And finally, if you plan to install more clusters, my recommendation is to download the required Ambari and HDP .repo and rpm files (tar.gz), build a local repository, and install the binaries from there rather than from the Internet (user guide here).
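For the local-repository suggestion, a minimal sketch of the setup is below. All URLs, hostnames, and paths are placeholders (the real tarball URLs come from the Hortonworks documentation for your Ambari/HDP versions); it assumes a CentOS/RHEL web server with httpd and createrepo already installed.

```shell
# 1. Fetch the Ambari and HDP repository tarballs.
#    (Placeholder URLs -- take the real ones from the docs for your versions.)
wget -O /tmp/ambari-repo.tar.gz "<ambari-repo-tarball-url>"
wget -O /tmp/hdp-repo.tar.gz    "<hdp-repo-tarball-url>"

# 2. Unpack them under the web server's document root.
mkdir -p /var/www/html/repos
tar -xzf /tmp/ambari-repo.tar.gz -C /var/www/html/repos
tar -xzf /tmp/hdp-repo.tar.gz    -C /var/www/html/repos

# 3. (Re)generate the yum metadata if the tarball did not ship repodata/.
createrepo /var/www/html/repos

# 4. On each cluster node, point yum at the local mirror instead of the Internet.
cat > /etc/yum.repos.d/local-hdp.repo <<'EOF'
[local-hdp]
name=Local HDP mirror
baseurl=http://repo-server.example.com/repos
enabled=1
gpgcheck=0
EOF
```

With the mirror in place, you also set the base URLs in the Ambari install wizard ("Select Stack" step) to point at the local web server, so agents never reach out to the public repositories.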