Member since: 01-16-2018
Posts: 607
Kudos Received: 48
Solutions: 106

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 382 | 05-06-2024 06:09 AM
 | 508 | 05-06-2024 06:00 AM
 | 549 | 05-06-2024 05:51 AM
 | 608 | 05-01-2024 07:38 AM
 | 646 | 05-01-2024 06:42 AM
12-16-2020
07:15 AM
Hello @Hadoop_Admin Thanks for using Cloudera Community. To reiterate: your Team enabled Replication from ClusterA to ClusterB & is seeing Data Loss, meaning the Record Count on Source & Target doesn't match. This is observed for a large Table of ~2TB Size. Kindly confirm the process being used to compare the Record Count: is VerifyReplication being utilised for this purpose? Next, HBase Replication is Asynchronous, i.e. some lag is expected while the Source Table is being loaded. Confirm whether the Command [status 'replication'] reports any Replication Lag. Next, we need to establish whether the RowCount difference is static or dynamic during a period of no load on the Source Table (if feasible). If the Source Table has 100 Rows & the Target Table has 90 Rows & remains so, we can assume those 10 Rows are the difference; if the Target Table shows 91 > 92 > 93 ... Rows, we can assume Replication is catching up. Finally, check whether any Audit Record shows Delete Ops on the Target Table. - Smarak
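If helpful, the checks above can be run roughly as follows (the peer id "1" and table_name are placeholders for your environment):

```shell
# From the HBase shell: per-peer replication status, including queue size
# and the age of the last shipped operation (i.e. replication lag).
echo "status 'replication'" | hbase shell -n

# Compare Source & Target cell-by-cell with the VerifyReplication MapReduce
# job (run on the Source cluster; first argument is the peer id).
hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication 1 table_name

# Simple row count on each cluster for a static comparison during no-load.
hbase org.apache.hadoop.hbase.mapreduce.RowCounter table_name
```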
12-11-2020
06:09 AM
Hello @Manoj690 Thanks for contacting Cloudera Community. While taking a Full Backup, you are facing an IOException while waiting on a Lock. Kindly share the output of the Command "hbase backup history" along with "list_locks" from the HBase Shell. The requested details would confirm the status of any running Backup & the Locks placed on the Tables. Additionally, share the HBase Version wherein you are using the Backup Command. - Smarak
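The requested details can be collected as below (run as the hbase user, against a live cluster):

```shell
# Backup history from the CLI: lists prior backup sessions & their state.
hbase backup history

# From the HBase shell: locks currently held on tables & namespaces.
echo "list_locks" | hbase shell -n

# HBase version in use.
hbase version
```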
12-09-2020
05:15 AM
From CML, you could access Greenplum via JDBC.
12-08-2020
06:26 AM
Hello @tuk If the Post by Pabitra assisted you, kindly mark that Post as the Solution. If you utilised any other approach, kindly share the details in the Post as well. Thanks, Smarak
12-08-2020
06:15 AM
Hello @ma_lie1 This is an old Post, yet sharing the details to close the Post & for future reference. You can build the HBCK2 Tool from the HBCK2 Git Page. Sharing the steps below (assuming git & maven are installed). The Command Usage is documented via Link [1]. - Smarak [1] https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2
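A minimal sketch of the build steps referenced above (the exact jar version in the target directory will depend on the checked-out release):

```shell
# Clone & build the hbase-operator-tools repository (requires git & maven).
git clone https://github.com/apache/hbase-operator-tools.git
cd hbase-operator-tools
mvn clean install -DskipTests

# The HBCK2 jar lands under hbase-hbck2/target/; run it via the hbase CLI.
hbase hbck -j hbase-hbck2/target/hbase-hbck2-<version>.jar --help
```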
12-08-2020
03:47 AM
Backup Command (run as the hbase Super User):
hbase backup create full hdfs://hostname:port/backup -t table_name
Restore Command:
hbase restore hdfs://hostname:port/backup -t table_name
We did this on the same Cluster.
11-22-2020
09:28 PM
Hello @Manoj690 RegionServer is a Service & your Team can add the RegionServer Service interactively via Ambari (HDP) or Cloudera Manager (CDH or CDP). - Smarak
11-12-2020
10:51 AM
Hello @lenu If you have Replication enabled, WALs are likely to be persisted until they are replicated. If you aren't using HBase Replication, ensure there are no Peers (via "list_peers") & the "hbase.replication" Property is false. If the oldWALs still aren't removed, enable TRACE Logging for the HBase Master Service, which would print the CleanerChore Thread removing or skipping entries. - Smarak
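The checks above, roughly (the log4j line assumes a standard log4j.properties safety valve for the Master; adjust for your distribution):

```shell
# From the HBase shell: confirm no replication peers exist.
echo "list_peers" | hbase shell -n

# To TRACE the cleaner chore, add this to the Master's log4j configuration
# & restart the Master; it logs each oldWAL file removed or skipped.
# log4j.logger.org.apache.hadoop.hbase.master.cleaner=TRACE
```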
11-12-2020
10:43 AM
Hello @ebythomaspanick It appears you are hitting HBASE-20616. If you have verified that no other Procedures are in RUNNABLE State (except the Truncate & Enable Procedures for the concerned Table), sidelining the MasterProcWALs & clearing the Temp Directory "/apps/hbase/data/.tmp" would ensure the TruncateTableProcedure isn't retried. Stop the Masters (Active & Standby) during this Step to avoid any issues. - Smarak
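Under the stated assumptions (HDP-style HBase root directory "/apps/hbase/data", both Masters stopped first), the sidelining could look like:

```shell
# Stop the Active & Standby HMasters first. Then, as the hbase user:

# Sideline (not delete) the MasterProcWALs so stuck procedures are dropped.
hdfs dfs -mv /apps/hbase/data/MasterProcWALs /apps/hbase/data/MasterProcWALs.bak

# Clear the temp directory referenced above.
hdfs dfs -rm -r -skipTrash /apps/hbase/data/.tmp

# Restart the Masters, then re-check procedures from the HBase shell.
echo "list_procedures" | hbase shell -n
```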