Member since: 01-16-2018
Posts: 613
Kudos Received: 48
Solutions: 109
My Accepted Solutions
| Views | Posted |
|---|---|
| 777 | 04-08-2025 06:48 AM |
| 952 | 04-01-2025 07:20 AM |
| 914 | 04-01-2025 07:15 AM |
| 962 | 05-06-2024 06:09 AM |
| 1500 | 05-06-2024 06:00 AM |
01-10-2021
10:19 PM
Hello @Madhureddy, thanks for using Cloudera Community. Based on the post, the table "Meterevents" was loaded with 3K records, and an Insert Select operation was performed from "Meterevents" into "events_Hbase". The "events_Hbase" table is showing 1200 records. We wish to check the following details:
1. Connect to the HBase shell and confirm the count of the "events_Hbase" table.
2. If the count of "events_Hbase" is 1200, check the uniqueness of the first column being used as ":key" while loading the table. It's likely the row key is being repeated, so each repeated insert is stored as a new version of the same row, thereby reducing the row count.
3. Your team can verify the above by creating two tables and inserting 10 unique rows (by row key column) into the first table and 10 rows (having only 5 unique row key values) into the second table. Next, create two Hive tables using the HBaseStorageHandler and perform the Insert Select SQL. Then check the row counts; see the sketch after this reply. - Smarak
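A minimal sketch of the duplicate-row-key behaviour in the HBase shell, using a throwaway table name `rowkey_test` (hypothetical):

```bash
hbase shell <<'EOF'
create 'rowkey_test', 'cf'
# Two puts with the SAME row key: the second is stored as a newer
# version of the same cell, not as a new row.
put 'rowkey_test', 'key1', 'cf:val', 'a'
put 'rowkey_test', 'key1', 'cf:val', 'b'
# A put with a different row key adds a genuine second row.
put 'rowkey_test', 'key2', 'cf:val', 'c'
# Reports 2 rows, not 3, because 'key1' was overwritten.
count 'rowkey_test'
EOF
```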
01-10-2021
10:07 PM
Hello @SurajP, thanks for using Cloudera Community. You mentioned the SQL works in a Zeppelin notebook on HDP v3.0 but doesn't work on HDP v2.6. The error you posted indicates that the table "enrichedEvents" isn't found. This is unlikely to be caused by any configuration issue; rather, it points to the absence of the table. You haven't mentioned the interpreter used, yet we would request you to query the metadata and confirm whether the object "enrichedEvents" is listed there (a sketch follows below). Accordingly, you can proceed with the SQL. - Smarak
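A minimal sketch of the metadata check, assuming a Hive interpreter; the JDBC URL below is a placeholder for your HiveServer2 endpoint:

```bash
# Substitute your HiveServer2 host and port.
beeline -u "jdbc:hive2://hs2-host:10000" \
  -e "SHOW TABLES LIKE 'enrichedEvents';"
```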
12-21-2020
06:56 AM
1 Kudo
Hello @vidanimegh, thanks for using Cloudera Community. To answer your queries:
(I) There is no dependency between the two services with respect to which one should be installed first. If you have to pick, authenticate first and then authorise, i.e. set up Kerberos before installing Ranger.
(II) Again, no caution is required, as there is no explicit dependency.
(III) For the HDP stack, there is no requirement to enable TLS/SSL before Kerberos. We recommend performing the steps (enabling Kerberos | enabling Ranger | enabling TLS/SSL) via Ambari for easier management.
Let us know if you have any further queries. - Smarak
12-18-2020
12:45 AM
Hello @Anks2411, thanks for sharing the cause. To your query: yes, the HBase balancer should be enabled, and "balance_switch" should be set to "true" (a sketch follows below). Once you have no further queries, kindly mark the post as Solved as well. - Smarak
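A minimal sketch of the shell commands involved; note that `balance_switch` returns the previous state of the switch, and `balancer_enabled` confirms the current one:

```bash
hbase shell <<'EOF'
# Enable the balancer; the output is the PREVIOUS switch state.
balance_switch true
# Confirm the balancer is now enabled.
balancer_enabled
EOF
```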
12-18-2020
12:43 AM
Hello @TGH. Yes, after making any HBCK2 changes, restart the service, as the components hold a cached version of the metadata as well. Let us know how things go. - Smarak
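As a hedged illustration, a typical HBCK2 operation followed by the restart; the jar path and region name below are placeholders for your environment:

```bash
# Adjust the HBCK2 jar path to your install; the encoded region
# name comes from the HBase Master UI or the hbase:meta table.
hbase hbck -j /opt/hbase-operator-tools/hbase-hbck2.jar \
  assigns <encoded-region-name>
# Then restart HBase (e.g. via Ambari) so the HMaster and
# RegionServers drop their cached metadata.
```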
12-18-2020
12:42 AM
1 Kudo
Hello @ASIF123, thanks for using Cloudera Community. For the orphan region, we need to confirm the source. We recommend checking the region IDs in the "hbase:meta" table and in the HBase data directory (a sketch of both checks follows below). If a region ID isn't present in "hbase:meta" and only the region directory is present, check whether the region directory has any StoreFiles or "recovered.edits" files:
1. If no StoreFiles or "recovered.edits" files are present, it's likely the region was part of a Split or Merge (verifiable from the HMaster logs), and we can safely sideline the region directory.
2. If no StoreFiles are present yet "recovered.edits" files are present, again check whether the region was part of a Split or Merge (verifiable from the HMaster logs), use WALPlayer to replay the "recovered.edits" files to be on the safer side, and then sideline the region directory. - Smarak
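A minimal sketch of the checks and the WALPlayer replay, assuming a table named `mytable` in the `default` namespace and the default HBase root directory `/hbase` (all placeholders):

```bash
# Region rows known to the catalog.
echo "scan 'hbase:meta', {FILTER => \"PrefixFilter('mytable')\"}" | hbase shell

# Region directories actually present on HDFS.
hdfs dfs -ls /hbase/data/default/mytable

# Inspect a suspect region directory for StoreFiles / recovered.edits.
hdfs dfs -ls -R /hbase/data/default/mytable/<region-id>

# Replay recovered.edits into the table before sidelining the directory.
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer \
  /hbase/data/default/mytable/<region-id>/recovered.edits mytable
```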
12-17-2020
12:47 AM
Hello @Anks2411, thanks for using Cloudera Community. You will need to check the logs of the "cdh-dn-28.prod.mcs.az-eastus2.mob.nuance.com" RegionServer to confirm the reason for region "8808c0e1917bf0b4acea2d83d9548463" being in FAILED_CLOSE. For any region to be moved from RegionServer A to RegionServer B, the region has to be closed on RegionServer A before being opened on RegionServer B. In your case, the region is failing to close on RegionServer A, and the logs would confirm the reason (a sketch follows below). - Smarak
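A minimal sketch of the log check, assuming the default RegionServer log location on that host (the path and file pattern are assumptions; adjust to your deployment):

```bash
# Run on cdh-dn-28.prod.mcs.az-eastus2.mob.nuance.com.
grep '8808c0e1917bf0b4acea2d83d9548463' \
  /var/log/hbase/hbase-*-regionserver-*.log* | grep -iE 'close|fail'
```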
12-16-2020
07:15 AM
Hello @Hadoop_Admin, thanks for using Cloudera Community. To reiterate: your team enabled replication from ClusterA to ClusterB and is seeing data loss, where "data loss" means the record counts on the source and the target don't match. This is observed for a large table of ~2 TB size. A few things to check (a sketch follows after this list):
1. Kindly confirm the process being used to compare the record counts. Is the VerifyReplication tool being utilised for this purpose?
2. HBase replication is asynchronous, i.e. some lag is expected if the source table is being loaded. Confirm whether the command [status 'replication'] is reporting any replication lag.
3. We need to establish whether the row-count difference is static or dynamic during a period of no load on the source table (if feasible). If the source table has 100 rows and the target table has 90 rows and remains so, we can assume those 10 rows are the difference. If the target table shows 91 > 92 > 93 ... rows, we can assume replication is catching up.
4. Finally, check whether any audit record shows Delete ops on the target table. - Smarak
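A minimal sketch of both checks, assuming replication peer id `1` and table name `mytable` (both placeholders); VerifyReplication is run on the source cluster:

```bash
# Compare source vs. target rows for the replicated table.
hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication \
  1 mytable
# Check for replication lag from the shell.
echo "status 'replication'" | hbase shell
```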
12-16-2020
07:06 AM
Hello @ShamsN, kindly update the post if you have solved the issue. If you continue to face the issue, let us know and we can assist you. - Smarak
12-16-2020
07:04 AM
@TGH No worries. If you are planning to drop the table anyhow, let's use the following approach (a sketch follows after this list):
1. You (your previous team) have deleted the table-level directories from HDFS.
2. In "hbase:meta", we have one row per table region and one row for the table itself.
3. Use "get 'hbase:meta', '<RegionID RowKey>'". Note that your team can take the scan output to check the format of the row key for the concerned table, which includes the region ID.
4. After confirming the output from the "get" command, use the "deleteall" command with the same argument to remove the rows of the table's regions. Finally, remove the table-level row as well.
5. Restart the HBase service to clear the cache as well.
We recommend testing the above on one sample table (create table > remove region directory > remove meta info > restart HBase > check HBCK output). - Smarak
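A minimal sketch of steps 3-5, assuming a table named `mytable`; the region row key shown is a made-up example of the `<table>,<start-key>,<timestamp>.<encoded-id>.` format, and the real values must come from your own scan output:

```bash
hbase shell <<'EOF'
# Find the exact row keys for the table's regions and the table row.
scan 'hbase:meta', {FILTER => "PrefixFilter('mytable')"}
# Inspect one region row before deleting anything.
get 'hbase:meta', 'mytable,,1600000000000.abcdef1234567890abcdef1234567890.'
# Remove it with the SAME row key, then the table-level row.
deleteall 'hbase:meta', 'mytable,,1600000000000.abcdef1234567890abcdef1234567890.'
deleteall 'hbase:meta', 'mytable'
EOF
# Restart the HBase service afterwards so cached meta is dropped.
```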