Member since: 07-30-2020
Posts: 216
Kudos Received: 40
Solutions: 59
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 276 | 09-26-2024 05:30 AM |
| | 988 | 10-26-2023 08:08 AM |
| | 1745 | 09-13-2023 06:56 AM |
| | 1975 | 08-25-2023 06:04 AM |
| | 1448 | 08-17-2023 12:51 AM |
02-21-2024
03:39 AM
1 Kudo
If you have HA enabled, try copying the edits from the Standby NameNode to the Active NameNode and restart.
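A minimal sketch of that copy, assuming both NameNodes use /hadoop/dfs/name as dfs.namenode.name.dir and that nn-standby / nn-active are placeholder hostnames; confirm the real path with hdfs getconf first, and back up the Active NameNode's metadata before overwriting anything:

```bash
# Confirm the configured metadata directory (run on either NameNode host).
hdfs getconf -confKey dfs.namenode.name.dir

# Back up the Active NameNode's current metadata first.
ssh nn-active 'cp -a /hadoop/dfs/name/current /hadoop/dfs/name/current.bak'

# Copy the edits files from the Standby to the Active NameNode
# (-3 routes the transfer through the local machine).
scp -3 nn-standby:/hadoop/dfs/name/current/edits_* \
       nn-active:/hadoop/dfs/name/current/

# Restart the Active NameNode afterwards (e.g. from Cloudera Manager).
```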
10-26-2023
08:08 AM
Hi @nsup This is a known issue with Py3 compatibility in hbase-indexer. It should be fixed in CDP 7.1.8 CHF6 and in the CDP 7.1.9 release.
10-24-2023
01:34 AM
To enable Ranger authorization for HDFS on the same cluster, do not select the Ranger service dependency; instead, check the 'Enable Ranger Authorization' checkbox under HDFS. In the base cluster, even if you check the "Ranger_service" box, CM seems to save the configuration successfully, but the box will never stay checked, and a warning message will be logged in the CM server logs: "CyclicDependencyConfigUpdateListener - Unsetting dependency from service hdfs to service ranger to prevent cyclic dependency". Refer to the article below, which covers the equivalent Solr-Ranger dependency: https://my.cloudera.com/knowledge/WARN-quotUnsetting-dependency-from-servicequot-when-Ranger?id=329275
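Once the checkbox is set, one hedged way to confirm the plugin actually took effect, assuming a standard CDP client configuration, is to check whether HDFS picked up Ranger's INode attribute provider:

```bash
# With Ranger authorization enabled for HDFS, this key should resolve to
# Ranger's provider class; an empty result suggests it is not active.
hdfs getconf -confKey dfs.namenode.inode.attributes.provider.class
# Expected when enabled:
#   org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer
```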
10-18-2023
11:14 PM
@jayes You will need to use --hiveconf to specify the Hive configuration on the Beeline command line. The example below sets the replication factor to 1:

beeline -u "jdbc:hive2://machine1.dev.domain.com:2181/default" --hiveconf dfs.replication=1 -n hive -p hive

Then check the replication factor from the Beeline prompt:

> set dfs.replication;

You can modify your beeline command accordingly.
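To verify that the override actually reached the data Hive wrote, a hedged follow-up check from the HDFS side (the warehouse path and file name below are hypothetical; point it at your table's actual location):

```bash
# %r prints the replication factor of a file; expect "1" if the
# --hiveconf override took effect for the newly written data.
hdfs dfs -stat %r /warehouse/tablespace/managed/hive/mydb.db/mytable/000000_0
```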
10-10-2023
12:53 AM
@LLL_ Both views, whether from the hadoop command line or from the Web UI, come from the in-memory data structures in which the NameNode holds its metadata. The NameNode keeps this metadata in memory for fast access. The fsimage is the persistent copy of that metadata: the NameNode reads it at startup, loads it into memory, and serves the dynamic state of the file system from there when users query it via the Web UI.
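If you want to see what the persistent fsimage actually contains, the Offline Image Viewer can dump it. A hedged sketch, assuming the metadata lives under /hadoop/dfs/name/current; the transaction-ID suffix will differ on your cluster:

```bash
# List the image files; the numeric suffix is the last transaction ID
# baked into that image.
ls /hadoop/dfs/name/current/ | grep fsimage

# Dump a specific fsimage to XML with the Offline Image Viewer.
# The suffix below is made up; use one from the listing above.
hdfs oiv -p XML \
  -i /hadoop/dfs/name/current/fsimage_0000000000000012345 \
  -o /tmp/fsimage.xml
```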
10-09-2023
05:51 AM
Can you try running the command below on all 3 ZooKeeper instances?

echo "stat" | nc localhost 2181 | grep Mode
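If the three instances are reachable over the network, a hedged one-liner checks all of them from a single host (zk1..zk3 are placeholder hostnames):

```bash
# Ask each ZooKeeper server for its mode (leader / follower / standalone).
# zk1..zk3 are hypothetical hostnames; substitute your real ZK hosts.
# If nothing comes back, "stat" may be missing from 4lw.commands.whitelist.
for host in zk1 zk2 zk3; do
  echo -n "$host: "
  echo "stat" | nc "$host" 2181 | grep Mode
done
```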
10-04-2023
11:41 PM
@amrahmed The ZooKeeper snapshot size might have grown large, and the followers may not be able to sync with the leader. You can try increasing the sync and init limits for ZooKeeper and check again:

ZooKeeper => Configuration => search for 'limit'

Increase initLimit and syncLimit:
- initLimit from 10 to 30
- syncLimit from 5 to 25

Then restart ZooKeeper.
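Outside Cloudera Manager, the same change corresponds to the following zoo.cfg entries. Both limits are multiples of tickTime, so with tickTime=2000 ms, initLimit=30 gives followers 60 s to connect and sync, and syncLimit=25 allows 50 s between a request and an acknowledgement:

```
# zoo.cfg excerpt - initLimit/syncLimit are expressed in ticks
tickTime=2000
initLimit=30
syncLimit=25
```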
10-04-2023
11:30 PM
@Noel_0317 The directory /hadoop/dfs/name/ is likely your NameNode data directory, which contains the metadata in the form of fsimage and edits, so I would not recommend deleting it if that is the case. You can confirm whether this directory really is the NameNode data directory by checking the HDFS configuration. If the cluster is working and still taking writes, yet the latest data under this directory is from July, check whether the NameNode data dir has been changed to a different mount point.
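A hedged way to make both checks from the shell (the directory path comes from the post above):

```bash
# 1) Which directory is the NameNode actually configured to use?
hdfs getconf -confKey dfs.namenode.name.dir

# 2) When were the fsimage/edits files under the suspect path last written?
ls -lt /hadoop/dfs/name/current | head
```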
09-29-2023
07:25 AM
@Noel_0317 If you want to know how the DataNode got up to 705 GB, you will need to run du at the Linux filesystem level against the DataNode's blockpool. For example:

du -s -h /data/dfs/dn/current/BP-331341740-172.25.35.200-1680172307700/

/data/dfs/dn/ => DataNode data dir
BP => blockpool used by the DataNode

The command above should return 705 GB. The blockpool contains the subdirs that hold the file blocks present on this specific DataNode, whereas 'hdfs dfs -du' takes the entire HDFS storage into account.
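To cross-check the local du against what HDFS itself reports per DataNode, a hedged option is the admin report (run as the hdfs user; the grep filter is illustrative, using the IP from the example blockpool name above):

```bash
# Per-DataNode capacity and "DFS Used" as reported by the NameNode.
hdfs dfsadmin -report

# Or narrow the output to a single node; substitute your DataNode's address.
hdfs dfsadmin -report | grep -A 8 "Name: 172.25.35.200"
```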
09-13-2023
06:56 AM
@newtocm You can't pause the Balancer. You can kill it and start it again, and it will try to balance whatever DFS data remains to be balanced.
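For reference, a hedged sketch of stopping and relaunching the balancer from the command line (the threshold value is illustrative; in Cloudera Manager the balancer is typically run via the Balancer role's rebalance action instead):

```bash
# Start the balancer; it exits on its own once every DataNode is within
# the threshold (percent deviation from average cluster utilization).
hdfs balancer -threshold 10

# There is no pause: interrupt it (Ctrl-C) or kill its JVM to stop it,
# then rerun the command above to continue balancing what remains.
```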