Member since: 10-21-2019
Posts: 9
Kudos Received: 1
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3150 | 01-16-2020 02:06 AM |
01-16-2020 02:06 AM
Got the solution. Follow this guide and things should come back up fine: https://cwiki.apache.org/confluence/display/AMBARI/Cleaning+up+Ambari+Metrics+System+Data
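For anyone landing here later, the guide's procedure for embedded (single-node) mode boils down to: stop the Metrics Collector, move its HBase data directories aside, and start it again so the collector recreates the schema (and its znodes) from scratch. Here is a minimal dry-run sketch that only prints the plan for review instead of executing it; the default paths below are assumptions, so read the real hbase.rootdir and hbase.tmp.dir values from the AMS configs in Ambari before deleting anything:

```shell
# Dry-run sketch of the AMS data cleanup for embedded mode (single node).
# The AMS_* defaults are assumptions -- check ams-hbase-site in Ambari
# for the actual hbase.rootdir / hbase.tmp.dir on your cluster.
AMS_ROOTDIR="${AMS_ROOTDIR:-/var/lib/ambari-metrics-collector/hbase}"
AMS_TMPDIR="${AMS_TMPDIR:-/var/lib/ambari-metrics-collector/hbase-tmp}"

# Print the cleanup steps instead of running them, so the plan can be
# reviewed (and the paths verified) before anything is removed.
ams_cleanup_plan() {
    printf '%s\n' \
        "ambari-metrics-collector stop" \
        "mv $AMS_ROOTDIR $AMS_ROOTDIR.bak" \
        "mv $AMS_TMPDIR $AMS_TMPDIR.bak" \
        "ambari-metrics-collector start"
}

ams_cleanup_plan
```

Moving the directories aside (rather than deleting them) keeps a fallback copy of the old metrics data until you have confirmed the collector is healthy again.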
01-13-2020 10:24 PM
We have an HDP 3.1.0 cluster (single node) without HBase.
The ambari-metrics-collector keeps stopping with the following error:
2020-01-13 22:09:55,247 INFO org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, tries=6, retries=16, started=4195 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.NotServingRegionException: METRIC_AGGREGATE_UUID,<^\xC6\xEA\x9A\x7F^\x02\x8C\x98\xE5\xCD\x83\xAC\xEC\xDB\x00\x00\x00\x00\x00\x00\x00\x00,1556526940170.07aab84f4d80a29d771d895a77185269. is not online on xxxxxxx,61320,1578982133933
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3273)
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3250)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1414)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2964)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3301)
at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
, details=row 'a�^T}^NK�+`�u�h��^@^Ao�u=0' on table 'METRIC_AGGREGATE_UUID' at region=METRIC_AGGREGATE_UUID,<^\xC6\xEA\x9A\x7F^\x02\x8C\x98\xE5\xCD\x83\xAC\xEC\xDB\x00\x00\x00\x00\x00\x00\x00\x00,1556526940170.07aab84f4d80a29d771d895a77185269., hostname=xxxxxx,61320,1563230648592, seqNum=103729
I have a couple of questions.
1. Is HBase a mandatory service for clusters running HDP 3.1.0+?
2. How do we recreate missing or accidentally deleted znodes, such as the ams-hbase-secure znode, in ZooKeeper? Which component handles this? Many components throw errors like "NoNode for xxxxx" in ZooKeeper.
3. We tried renaming the znode parent path in the config, but with no luck.
Any help would be appreciated.
10-21-2019 08:26 PM
@cjervis Can anyone help me out here?
10-21-2019 10:34 AM
I am building a 4-node cluster and I want to enable HDFS and RM HA. The HDP version is 3.1.0, and the YARN ats-hbase app is always running. Is it mandatory to destroy the app before enabling HA, as described in https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.0.1/data-operating-system/content/remove_ats_hbase_before_switching_between_clusters.html? Thanks a lot in advance for your help.
Labels:
- Apache HBase
- Apache YARN