Member since
07-30-2020
219
Posts
45
Kudos Received
60
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 429 | 11-20-2024 11:11 PM |
| | 486 | 09-26-2024 05:30 AM |
| | 1081 | 10-26-2023 08:08 AM |
| | 1852 | 09-13-2023 06:56 AM |
| | 2126 | 08-25-2023 06:04 AM |
05-07-2024
05:31 AM
Try bypassing the stuck procedures using the HBCK2 jar, then restart the RegionServers.
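For reference, a bypass with the HBCK2 jar might look like the sketch below. The jar path and the procedure IDs (1234, 5678) are placeholders for your environment, not values from this thread:

```shell
# Sketch only: bypass stuck procedures by PID using the HBCK2 jar.
# -o overrides procedures held by other workers; -r also bypasses child procedures.
hbase hbck -j /opt/hbase-operator-tools/hbase-hbck2.jar bypass -o -r 1234 5678

# Afterwards, perform a rolling restart of the affected RegionServers.
```

Get the PIDs to bypass from the Master UI's Procedures page or from `list_procedures` in the HBase shell before running this.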
05-06-2024
01:48 AM
1 Kudo
@MrNicen First, you will need to check the RegionServer log on l230-n2.<SERVER>. Per the WARN message, the region is already reported as OPEN by the same RegionServer that is trying to open it again, which leaves it stuck in an OPENING state. It appears there are SCPs or multiple assign procedures running in the background, trying to open a region that is already open. Since you are on a version that can use neither the HBCK2 jar nor the hbck1 commands, you can try the following:

1) Stop both Masters.
2) Move the contents of MasterProcWALs to a backup location:
# hdfs dfs -mv /hbase/MasterProcWALs/* /tmp/
3) Start the Masters.

If the above doesn't solve the issue, I suggest raising a support case with Cloudera to review the logs.
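To confirm whether such procedures exist before clearing the procedure WALs, the backlog can be inspected from the HBase shell. A sketch, assuming you have HBase admin access:

```shell
# List currently running procedures non-interactively; look for
# ServerCrashProcedure entries and duplicate assigns for the affected region.
echo "list_procedures" | hbase shell -n
```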
02-23-2024
05:19 AM
What error are you getting while writing to HDFS?
02-21-2024
03:39 AM
1 Kudo
If you have HA enabled, try copying the edits from the Standby NameNode to the Active NameNode and restarting.
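As a rough sketch of that copy (the name directory `/dfs/nn` and the hostname are placeholder assumptions, not your real paths; stop both NameNodes first and back up the target directory before copying anything):

```shell
# Hypothetical example: copy the edit log segments from the Standby's
# name directory to the Active's. Adjust the paths and hostname to
# match your cluster layout before running.
scp /dfs/nn/current/edits_* active-nn.example.com:/dfs/nn/current/
```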
10-26-2023
08:08 AM
Hi @nsup This is a known issue with Python 3 compatibility in hbase-indexer. It should be fixed in CDP 7.1.8 CHF6 and in the CDP 7.1.9 release.
10-24-2023
01:34 AM
To enable Ranger authorization for HDFS on the same cluster, do not select the Ranger service dependency under HDFS; select the 'Enable Ranger Authorization' checkbox instead. In the base cluster, even if you check the "Ranger_service" box, CM appears to save the configuration successfully, but the box will never stay checked, and a warning will be logged in the CM server logs: "CyclicDependencyConfigUpdateListener - Unsetting dependency from service hdfs to service ranger to prevent cyclic dependency". Refer to the article below, which covers the equivalent Solr-Ranger dependency. https://my.cloudera.com/knowledge/WARN-quotUnsetting-dependency-from-servicequot-when-Ranger?id=329275
10-18-2023
11:14 PM
@jayes You will need to use --hiveconf to pass the Hive configuration to beeline. The example command below sets the replication factor to 1:

beeline -u "jdbc:hive2://machine1.dev.domain.com:2181/default" --hiveconf dfs.replication=1 -n hive -p hive

Check the replication factor:

> set dfs.replication;

You can modify your beeline command accordingly.
10-10-2023
12:53 AM
@LLL_ Both sources of information, whether the hadoop command line or the Web UI, come from the in-memory data structures that the NameNode maintains as its metadata. The NameNode keeps this metadata in memory for fast access. The fsimage is the persistent copy of that metadata: the NameNode reads it at startup, loads it into memory, and from there presents the live state of the file system when users query it via the Web UI or CLI.
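If you want to examine the persistent copy directly, HDFS ships an Offline Image Viewer that can dump an fsimage checkpoint. A sketch, where the fsimage filename and name directory are illustrative placeholders:

```shell
# Dump a checkpointed fsimage to XML with the Offline Image Viewer (oiv).
# The input path is a placeholder; use an actual fsimage_* file from
# your NameNode's name directory.
hdfs oiv -p XML -i /dfs/nn/current/fsimage_0000000000000000042 -o /tmp/fsimage.xml
```

This shows you exactly what the NameNode would load into memory at the next startup, independent of the live in-memory state.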
10-09-2023
05:51 AM
Can you try running the command below on all 3 ZooKeeper instances?

echo "stat" | nc localhost 2181 | grep Mode
10-04-2023
11:41 PM
@amrahmed The ZooKeeper snapshot may have grown large enough that the followers can no longer sync with the leader within the configured limits. Try increasing the sync and init limits for ZooKeeper and check again:

ZooKeeper => Configuration => search for 'limit'
- increase initLimit from 10 to 30
- increase syncLimit from 5 to 25

Restart ZooKeeper.
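For reference, in a plain zoo.cfg the suggested values would look like the fragment below (tickTime is shown only for context, since both limits are expressed as multiples of it):

```
# zoo.cfg fragment (sketch) - initLimit and syncLimit are multiples of tickTime
tickTime=2000
# allowance for followers to connect and sync with the leader: 30 * 2000 ms = 60 s
initLimit=30
# allowance for a follower to stay in sync with the leader: 25 * 2000 ms = 50 s
syncLimit=25
```

In Cloudera Manager these map to the initLimit and syncLimit configuration fields rather than a hand-edited file.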