Member since: 01-16-2018
Posts: 385
Kudos Received: 23
Solutions: 47
My Accepted Solutions

Title | Views | Posted
---|---|---
 | 171 | 04-14-2022 01:29 AM
 | 574 | 04-14-2022 01:12 AM
 | 160 | 04-14-2022 12:58 AM
 | 307 | 03-29-2022 01:13 AM
 | 349 | 03-29-2022 01:00 AM
01-10-2021
10:22 PM
Hello @ShamsN Kindly update the post if you have solved the issue. If you are still facing it, let us know and we can assist you. We requested additional details in response to your post on 12/16. - Smarak
01-10-2021
10:20 PM
Hello @ASIF123 Checking whether the issue posted by your team has been resolved. If yes, kindly mark the post as Solved. If no, kindly review our post dated 12/18 and share the outcome of the action plan we shared. - Smarak
01-10-2021
10:19 PM
Hello @Madhureddy Thanks for using Cloudera Community. Based on the post, table "Meterevents" was loaded with 3K records and an Insert-Select operation was performed from "Meterevents" into "events_Hbase". The "events_Hbase" table is showing 1200 records. We wish to check the following: 1. Connect to the HBase shell and confirm the row count of the "events_Hbase" table. 2. If the count is 1200, check the uniqueness of the first column being used as ":key" while loading the table. It's likely the RowKey is being repeated, so later writes land as newer versions of the same row rather than new rows, reducing the row count. 3. Your team can verify this by creating two tables: insert 10 rows with unique RowKey values into the first, and 10 rows with only 5 unique RowKey values into the second. Then create two Hive tables using HBaseStorageHandler, perform the Insert-Select SQL, and compare the row counts. - Smarak
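A minimal sketch of the row-count and version check from the command line; the RowKey, column family name, and `VERSIONS` value are placeholders, not from the original thread:

```shell
# Count rows as HBase sees them: one row per unique RowKey.
echo "count 'events_Hbase'" | hbase shell -n

# Inspect the cell versions for a single RowKey ('row-001' and 'cf' are
# placeholders). Repeated RowKeys surface as extra VERSIONS of one row,
# not as extra rows:
echo "get 'events_Hbase', 'row-001', {COLUMN => 'cf', VERSIONS => 5}" | hbase shell -n
```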
01-10-2021
10:07 PM
Hello @SurajP Thanks for using Cloudera Community. You mentioned the SQL works in a Zeppelin notebook on HDP v3.0 but fails on HDP v2.6. The error you posted indicates that the table "enrichedEvents" isn't found. This is unlikely to be a configuration issue; it points to the absence of the table. You haven't mentioned the interpreter used, but we would request you to query the metadata to confirm whether the object "enrichedEvents" is listed there. Accordingly, you can proceed with the SQL. - Smarak
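One way to sketch the metadata check, assuming the table is expected in the Hive metastore; the JDBC URL is a placeholder for your HiveServer2 endpoint:

```shell
# Ask the metastore whether the table exists at all
# (hiveserver-host:10000 is an assumed endpoint):
beeline -u "jdbc:hive2://hiveserver-host:10000" -e "SHOW TABLES LIKE 'enrichedEvents'"
```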
12-21-2020
06:56 AM
1 Kudo
Hello @vidanimegh Thanks for using Cloudera Community. To answer your queries: (I) There is no dependency between the two services with respect to which should be installed first. If you have to pick, authenticate first and then authorise, i.e. set up Kerberos before installing Ranger. (II) Again, no caution required, as there is no explicit dependency. (III) For the HDP stack, there is no requirement to enable TLS/SSL before Kerberos. We recommend performing the steps (enabling Kerberos | enabling Ranger | enabling TLS/SSL) via Ambari for easier management. Let us know if you have any further queries. - Smarak
12-18-2020
12:45 AM
Hello @Anks2411 Thanks for sharing the cause. To your query: yes, the HBase Balancer should be enabled and "balance_switch" should be set to "true". Once you have no further queries, kindly mark the post as Solved as well. - Smarak
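A quick sketch of flipping and verifying the balancer from the HBase shell (`-n` runs it non-interactively):

```shell
# Enable the balancer; prints the previous setting:
echo "balance_switch true" | hbase shell -n

# Confirm the current balancer state:
echo "balancer_enabled" | hbase shell -n
```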
12-18-2020
12:43 AM
Hello @TGH Yes, after making any HBCK2 changes, restart the service, as the components hold a cached version of the metadata as well. Let us know how things go. - Smarak
12-18-2020
12:42 AM
1 Kudo
Hello @ASIF123 Thanks for using Cloudera Community. For the orphan region, we need to confirm the source. We recommend checking the Region IDs in the "hbase:meta" table and in the HBase data directory. If a Region ID isn't present in "hbase:meta" and only the region directory exists, check whether the region directory contains any StoreFiles or "recovered.edits" files. If neither is present, the region was likely part of a Split or Merge (verifiable via the HMaster logs) and we can safely sideline the region directory. If no StoreFiles are present yet "recovered.edits" files are, again check whether the region was part of a Split or Merge (verifiable via the HMaster logs), use WALPlayer to replay the "recovered.edits" files to be on the safer side, and then sideline the region directory. - Smarak
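A sketch of those checks from the command line; the table name, region directory name, and sideline path are illustrative placeholders:

```shell
# 1. Is the region listed in hbase:meta?
echo "scan 'hbase:meta', {ROWPREFIXFILTER => 'mytable'}" | hbase shell -n

# 2. Does the region directory hold StoreFiles or recovered.edits?
hdfs dfs -ls -R /hbase/data/default/mytable/1588230740abcdef

# 3. If recovered.edits exist, replay them first with WALPlayer:
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer \
  /hbase/data/default/mytable/1588230740abcdef/recovered.edits mytable

# 4. Sideline (move aside, don't delete) the region directory:
hdfs dfs -mv /hbase/data/default/mytable/1588230740abcdef /hbase/.sideline/mytable/
```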
12-17-2020
12:47 AM
Hello @Anks2411 Thanks for using Cloudera Community. You will need to check the logs of the "cdh-dn-28.prod.mcs.az-eastus2.mob.nuance.com" RegionServer to confirm the reason for region "8808c0e1917bf0b4acea2d83d9548463" being in FAILED_CLOSE. For any region to be moved from RegionServer A to RegionServer B, the region has to be closed on RegionServer A before being opened on RegionServer B. In your case, the region is failing to close on RegionServer A, and the logs would confirm the reason. - Smarak
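A minimal sketch of the log check on the RegionServer host; the log path and filename pattern are assumptions based on a typical CDH layout and may differ on your install:

```shell
# Find close-failure messages for the stuck region in the RegionServer log:
grep "8808c0e1917bf0b4acea2d83d9548463" /var/log/hbase/*REGIONSERVER*.log*
```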
12-16-2020
07:15 AM
Hello @Hadoop_Admin Thanks for using Cloudera Community. To reiterate, your team enabled replication from ClusterA to ClusterB and is seeing data loss, i.e. the record counts on source and target don't match, observed for a large table of ~2TB. A few points to check: 1. Kindly confirm the process being used to compare the record counts. Is VerifyReplication being used for this purpose? 2. HBase replication is asynchronous, so some lag is expected while the source table is being loaded. Confirm whether the command [status 'replication'] reports any replication lag. 3. We need to establish whether the row-count difference is static or shrinking during a period of no load on the source table (if feasible). If the source table has 100 rows and the target stays at 90, we can assume 10 rows are genuinely missing; if the target shows 91 > 92 > 93... rows, replication is catching up. 4. Finally, check whether any audit record shows Delete operations on the target table. - Smarak
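The lag check and the source/target comparison can be sketched as follows; the peer id `1` and the table name are placeholders:

```shell
# Replication status and lag, as seen from the HBase shell:
echo "status 'replication'" | hbase shell -n

# Cell-by-cell comparison of source vs target via the bundled MapReduce job
# (peer id and table name are assumed values):
hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication 1 mytable
```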
12-16-2020
07:06 AM
Hello @ShamsN Kindly update the post if you have solved the issue. If you continue to face it, let us know and we can assist you. - Smarak
12-16-2020
07:04 AM
@TGH No worries. If you are planning to drop the table anyhow, let's use the following approach: 1. You (your previous team) have already deleted the table-level directories from HDFS. 2. In "hbase:meta", we have one row per table region, plus one row for the table itself. 3. Use "get 'hbase:meta','<RegionID RowKey>'". Note that your team can use the scan output to check the format of the RowKey for the concerned table, which includes the Region ID. 4. After confirming the output of the "get" command, use the "deleteall" command with the same argument to remove the rows for the table's regions. Finally, remove the table-level row as well. 5. Restart the HBase service to clear the cache. We recommend testing the above on one sample table first (create table > remove region directory > remove meta info > restart HBase > check HBCK output). - Smarak
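The steps above can be sketched from inside `hbase shell`; the table name and the region row key (timestamp and encoded name) are illustrative, so confirm yours via the scan first:

```shell
# 1. List the meta rows for the table (region rows plus the table row):
scan 'hbase:meta', {ROWPREFIXFILTER => 'mytable'}

# 2. Verify one region row before deleting anything:
get 'hbase:meta', 'mytable,,1600000000000.abcdef1234567890abcdef1234567890.'

# 3. Remove that region row; repeat per region, then remove the table row:
deleteall 'hbase:meta', 'mytable,,1600000000000.abcdef1234567890abcdef1234567890.'
```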
12-11-2020
07:24 AM
Hello @TGH I downloaded the HBCK2 tool following the steps shared, and I can see "extraRegionsInMeta" listed in the "README.md" file. - Smarak
12-11-2020
06:09 AM
Hello @Manoj690 Thanks for contacting Cloudera Community. While taking a full backup, you are facing an IOException while waiting on a lock. Kindly share the output of the command "hbase backup history" along with "list_locks" from the HBase shell. The requested details would confirm the status of any running backup and the locks placed on the tables. Additionally, share the HBase version in which you are using the backup command. - Smarak
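The two requested outputs can be gathered like this:

```shell
# History and status of completed/running backup sessions:
hbase backup history

# Procedure locks currently held on tables, from the HBase shell:
echo "list_locks" | hbase shell -n
```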
12-11-2020
06:00 AM
Hello @ShamsN Thanks for using Cloudera Community. You mentioned the HBase RegionServers are shutting down and referenced an error, yet we don't see any error message beyond the UI. Please share the RegionServer logs to confirm the cause of the terminations. Additionally, what is the error you receive while listing tables? - Smarak
12-09-2020
05:06 AM
Hello @TGH Thanks for the response. To your queries: (I) HBCK2 has "extraRegionsInMeta" for removing regions from "hbase:meta" that don't have any HDFS directories. Running HBCK2 with this command lists the regions present in meta but absent from HDFS, and adding the fix flag (-f) removes them as well. (II) Using the delete command on "hbase:meta" isn't an issue as such, yet we generally avoid making manual changes to "hbase:meta". It's more of a recommendation, to avoid any manual oversight causing meta corruption. (III) We can change a region's state via the HBCK2 "setRegionState" command. Note that the HBCK2 Git page recommends using this command as a last resort, considering its risky nature. If you are aware of the risk associated with the command, you can run it to set the TableState or RegionState. - Smarak
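A sketch of points (I) and (III); the jar path, namespace:table, encoded region name, and target state are placeholders:

```shell
# (I) Dry-run first: list meta-only regions for the table, then fix with -f:
hbase hbck -j hbase-hbck2.jar extraRegionsInMeta default:mytable
hbase hbck -j hbase-hbck2.jar extraRegionsInMeta -f default:mytable

# (III) Last resort, per the HBCK2 README: force a region's state:
hbase hbck -j hbase-hbck2.jar setRegionState abcdef1234567890abcdef1234567890 CLOSED
```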
12-08-2020
09:23 AM
Hello @lihao This is an Old Post, yet we can use "-skip" flag of HBCK2 Tool to ensure the HBCK2 Tool doesn't check the Master Version. The "-skip" flag is documented via Link [1], which is the Git Page of HBCK2 Tool. - Smarak [1] https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2
12-08-2020
09:12 AM
1 Kudo
Hello @kvinod As cluster replication wasn't being used, based on the fact that "list_peers" isn't showing any peer, it's likely the CleanerChore thread wasn't performing its duties. Note that WALs are moved to oldWALs once the last sequence IDs of the WALs have been persisted to disk via MemStore flush. In other words, oldWALs being present doesn't necessarily mean that the WALs are being retained for replication. The cleanup of oldWALs is the CleanerChore thread's responsibility. As we covered above, the HBase service restart included an HMaster restart, which would ensure the CleanerChore thread is spawned afresh. Let me know if the above answers your queries. - Smarak
12-08-2020
06:26 AM
Hello @tuk If the Post by Pabitra assisted you, Kindly mark the Post as Solution. If you utilised any other approach, Kindly share the details in the post as well. Thanks, Smarak
12-08-2020
06:20 AM
Hello @TGH Sharing the steps for building the HBCK2 jar using the Git reference; additionally, refer to the post via [1] for details on building the HBCK2 tool as well. - Smarak [1] https://community.cloudera.com/t5/Support-Questions/How-to-get-hbck2-tool-for-CDH-6-3-2/m-p/295867/highlight/true#M218004
12-08-2020
06:15 AM
Hello @ma_lie1 This is an old post, yet sharing the details to close it out and for future reference. You can build the HBCK2 tool from the HBCK2 Git page. Sharing the steps below (this expects git and maven to be installed). The command usage is documented via link [1]. - Smarak [1] https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2
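A minimal sketch of the build, assuming git and maven are installed and the cluster's `hbase` binary is on the PATH:

```shell
# Clone and build the operator tools (HBCK2 lives in this repo):
git clone https://github.com/apache/hbase-operator-tools.git
cd hbase-operator-tools
mvn clean package -DskipTests

# Run the built jar against the cluster via the hbase launcher:
hbase hbck -j hbase-hbck2/target/hbase-hbck2-*.jar --help
```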
12-08-2020
06:05 AM
Hello @TGH Thanks for using Cloudera Community. You had Regions-In-Transition (RIT); the HDFS directories for the regions have been removed along with the ZNodes, yet HBase still reports RIT. You wish to fix the RIT issue by removing the meta table entries, as RIT prevents the Balancer from running. In HBase v2 (CDH v6.3.x), the MasterProcWALs are critical for any procedures that are stuck or blocked, and you mentioned a lot of procedures (Disable | Delete) being observed. The graceful way to handle this is the HBCK2 tool, which you can build using link [1]. Use HBCK2 to bypass the procedures (PIDs) associated with the table whose region directories have been removed. Once a PID is bypassed, the HMaster UI (Locks & Procedures section) shows the PID as "Bypass". After ensuring the required PIDs are bypassed, restart the HMaster service and use HBCK2 to remove the meta entries for the regions whose HDFS directories are gone. Use the "bypass" and "extraRegionsInMeta" HBCK2 commands as documented in link [1]. Alternatively, you can stop the HMaster > remove the MasterProcWALs (after confirming there are no RUNNABLE procedures other than the PIDs associated with the affected table) > start the HMaster. However, this isn't an ideal approach, and you can encounter the "Master is initialising" issue, for which the HBCK2 tool is required; that context is captured in link [1] as well. - Smarak [1] https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2
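The graceful path above can be sketched as follows; the jar path, PIDs, and namespace:table are placeholders to be taken from your HMaster UI:

```shell
# Bypass the stuck procedures by PID (-o overrides held locks):
hbase hbck -j hbase-hbck2.jar bypass -o 1234 5678

# After restarting the HMaster, remove the meta entries for regions
# whose HDFS directories no longer exist:
hbase hbck -j hbase-hbck2.jar extraRegionsInMeta -f default:mytable
```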
12-08-2020
05:48 AM
Hello @kvinod Thanks for the update. The replication ZNode being created after restart is expected. The checkbox for HBase replication being left unchecked indicates replication is disabled, yet I have observed a couple of cases wherein a CM config wasn't passed down to the service level, causing unexpected behaviour. The explicit addition of the parameter was to ensure the service (HBase in this case) is aware of the configuration. Alternatively, the HMaster restart (performed via the HBase restart) may have resolved the issue by spawning a new CleanerChore thread. As such, the issue was likely either the HBase service being unaware that replication was disabled, or the HMaster CleanerChore thread. By explicitly setting HBase replication to false and restarting the HBase service, we covered both possibilities. - Smarak
12-08-2020
03:29 AM
Hello @Manoj690 Thanks for using Cloudera Community. Your concern is that a Phoenix table created on top of a restored table isn't showing the non-primary-key columns correctly. Can you share the steps used by your team to back up the table and subsequently restore it? Additionally, confirm whether the backup and restore are being performed within the same cluster, along with the distribution being used (for a versioning check). - Smarak
12-08-2020
03:23 AM
Hello @kvinod Thanks for using Cloudera Community. Your concern is that HBase oldWALs on the HDFS path "/hbase/oldWALs" are occupying a lot of space; HBase replication isn't being used and the TTL is set to 1 minute. The HMaster TRACE logs capture the CleanerChore with verbose logging, yet I wish to check whether you have tried the following 3 options: 1. Restart the HMaster service to rule out any issues with the CleanerChore. 2. Confirm the parameter "hbase.replication" is set to false via the steps under section [1]. 3. Check whether "/hbase/replication" has any entries. If no replication is utilised (HBase replication or Lily Indexer), try removing the "/hbase/replication" ZNode and restart the HMaster service. - Smarak [1] CM => HBase => Configuration => Advanced => HBase Service Advanced Configuration Snippet (Safety Valve) for hbase-site.xml
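A sketch of checking the peers and the replication ZNode; the ZNode path assumes the default `zookeeper.znode.parent` of `/hbase`:

```shell
# Confirm no replication peers are configured:
echo "list_peers" | hbase shell -n

# Inspect the replication znode via HBase's bundled zookeeper client:
hbase zkcli ls /hbase/replication
```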
12-02-2020
09:46 PM
1 Kudo
Hello @nanda_bigdata
Sharing the solution to ensure the post is marked completed. From the WAL reader, we confirmed the writes in the RegionServer WAL pertain to one column family only, indicating the writes were arriving for one column family only. It was confirmed that the wrong HBase configuration was being used by the application. After ensuring the correct HBase configuration was used, the issue was fixed.
- Smarak
11-28-2020
06:52 AM
Hello @ibr If you are referring to the supported databases for CDP Private Cloud Base (which provides the SDX for Private Cloud Experiences), the list of supported databases, in which the metadata and authorisation data are maintained, is shared via link [1]. [1] https://docs.cloudera.com/cdp-private-cloud/latest/release-guide/topics/cdpdc-database-requirements.html
11-22-2020
09:28 PM
Hello @Manoj690 RegionServer is a service role, and your team can add it interactively via Ambari (HDP) or Cloudera Manager (CDH or CDP). - Smarak
11-12-2020
10:51 AM
Hello @lenu If you have replication enabled, WALs are likely to be persisted until they are replicated. If you aren't using HBase replication, ensure there are no peers (via "list_peers") and the "hbase.replication" property is false. If the oldWALs still aren't removed, enable TRACE logging for the HBase Master service, which would print the CleanerChore thread removing or skipping entries. - Smarak
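One way to sketch these checks; the HMaster host/port and the logger name are assumptions (the logLevel servlet is part of the HBase web UI), so adjust for your install:

```shell
# No peers should be listed if replication is unused:
echo "list_peers" | hbase shell -n

# How much space do the oldWALs occupy?
hdfs dfs -du -s -h /hbase/oldWALs

# Flip the cleaner logging to TRACE at runtime via the HMaster UI servlet
# (hmaster-host:16010 and the logger name are assumed values):
curl "http://hmaster-host:16010/logLevel?log=org.apache.hadoop.hbase.master.cleaner&level=TRACE"
```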
11-12-2020
10:43 AM
Hello @ebythomaspanick It appears you are hitting HBASE-20616. If you have verified that no other procedures are in RUNNABLE state (except the Truncate and Enable procedures for the concerned table), sidelining the MasterProcWALs and clearing the temp directory "/apps/hbase/data/.tmp" would ensure the TruncateTableProcedure isn't retried. Stop the Masters (Active & Standby) during this step to avoid any issues. - Smarak
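A minimal sketch of the sidelining step, assuming both HMasters are already stopped; the paths follow HDP defaults from the post and the `.sideline` suffix is illustrative:

```shell
# Sideline (rename, don't delete) the MasterProcWALs so they can be restored:
hdfs dfs -mv /apps/hbase/data/MasterProcWALs /apps/hbase/data/MasterProcWALs.sideline

# Clear the temp directory referenced in the post:
hdfs dfs -rm -r -skipTrash /apps/hbase/data/.tmp
```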