Member since: 07-30-2020
Posts: 211
Kudos Received: 34
Solutions: 58
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 699 | 10-26-2023 08:08 AM |
| | 1259 | 09-13-2023 06:56 AM |
| | 1490 | 08-25-2023 06:04 AM |
| | 1083 | 08-17-2023 12:51 AM |
| | 512 | 08-04-2023 12:36 AM |
05-15-2024
05:29 AM
If the current state in hbase:meta is OPEN, I wouldn't suggest performing any other action to change the state in meta. I suspect there are procedures running in the background trying to assign the region again. Do you see any such procedure in the "Procedures & Locks" section of the HBase Master Web UI?
05-10-2024
07:09 AM
If the Region Server reports that the region is already OPEN, scan the hbase:meta table from the hbase shell and check what state that region is in. If it's still in OPENING state in meta, try changing its state to OPEN. Do it for one of the regions and see if that brings the RIT count down to 133.
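As a sketch, the check and the state change could look like the following. The table name, the encoded region name, and the HBCK2 jar path are placeholders; substitute the values from your cluster:

```shell
# From the hbase shell: check the region state recorded in hbase:meta
# (replace 'mytable' with the affected table name).
echo "scan 'hbase:meta', {ROWPREFIXFILTER => 'mytable,', COLUMNS => 'info:state'}" | hbase shell

# If meta still shows OPENING, set the state to OPEN with HBCK2.
# <encoded_region_name> is taken from the meta row key above.
hbase hbck -j hbase-hbck2.jar setRegionState <encoded_region_name> OPEN
```

Changing only one region first, as suggested above, lets you confirm the approach works before repeating it for the rest.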
05-07-2024
05:31 AM
Try bypassing the stuck procedures using the HBCK2 jar, then restart the Region Servers.
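A minimal sketch of the bypass, assuming the HBCK2 jar is available on the node; the jar path and PIDs are placeholders, with the PIDs taken from the Master UI's "Procedures & Locks" page:

```shell
# Forcefully bypass stuck procedures by their PIDs; -o overrides the
# owner check so procedures holding locks can still be bypassed.
hbase hbck -j hbase-hbck2.jar bypass -o <pid1> <pid2>

# Afterwards, restart the Region Servers so stale assignment state is dropped.
```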
05-06-2024
01:48 AM
1 Kudo
@MrNicen First, you will need to check the RS log of l230-n2.<SERVER>. As per the WARN, the region is already reported as OPEN by the same RegionServer on which it is trying to open again, which puts it into an OPENING state. It seems there are SCPs or multiple assign procedures running in the background trying to open a region that is already open. Considering that you are on a version that can use neither the HBCK2 jar nor the hbck1 commands, you can try the following:

1) Stop both Masters.
2) Move the contents of MasterProcWALs to a backup location:
# hdfs dfs -mv /hbase/MasterProcWALs/* /tmp/
3) Start the Masters.

If the above doesn't solve the issue, I suggest raising a support case with Cloudera to review the logs.
02-23-2024
05:19 AM
What error are you getting while writing to HDFS?
02-21-2024
03:39 AM
1 Kudo
If you have HA enabled, try copying the edits from the Standby NameNode to the Active NameNode and restart.
10-26-2023
08:08 AM
Hi @nsup This is a known issue with Python 3 compatibility in hbase-indexer. It should be fixed in CDP 7.1.8 CHF6 and in the CDP 7.1.9 release.
10-24-2023
01:34 AM
To enable Ranger authorization for HDFS on the same cluster, do not select the Ranger service dependency; instead, check the 'Enable Ranger Authorization' box under HDFS. In the base cluster, even if you select the "Ranger_service" checkbox, CM appears to save the configuration successfully, but the box will never stay checked, and a warning is logged in the CM server log: "CyclicDependencyConfigUpdateListener - Unsetting dependency from service hdfs to service ranger to prevent cyclic dependency". Refer to the article below, which covers the analogous Solr-Ranger dependency:

https://my.cloudera.com/knowledge/WARN-quotUnsetting-dependency-from-servicequot-when-Ranger?id=329275
10-18-2023
11:14 PM
@jayes You will need to use --hiveconf to pass the Hive configuration along with the JDBC URL. The example command below sets the replication factor to 1:

beeline -u "jdbc:hive2://machine1.dev.domain.com:2181/default" --hiveconf dfs.replication=1 -n hive -p hive

Check the replication factor:

> set dfs.replication;

You can modify your beeline command accordingly.
10-10-2023
12:53 AM
@LLL_ Both pieces of information, whether from the hadoop command line or from the Web UI, come from the in-memory data structures that the NameNode keeps as its metadata. The NameNode maintains this metadata in memory for fast access. The fsimage is the persistent copy of that metadata: the NameNode reads it at startup and keeps the information in memory, so it can present the live state of the file system when a user queries it via the Web UI.
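As an aside, you can inspect that persistent fsimage offline with the Offline Image Viewer, which makes the split between on-disk fsimage and in-memory state concrete. The fsimage path below is a placeholder; use the actual file from your NameNode's current directory:

```shell
# Dump an fsimage to readable XML with the Offline Image Viewer (oiv).
# The input path is a placeholder for your NameNode's fsimage file.
hdfs oiv -p XML -i /path/to/current/fsimage_file -o /tmp/fsimage.xml
```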