Member since 09-03-2020 · 118 Posts · 6 Kudos Received · 0 Solutions
12-12-2024 10:24 AM · 1 Kudo
@JSSSS It looks like either you are running out of space in HDFS, or the three DataNodes are too busy to acknowledge the write request, which is causing the exception below. Please check whether HDFS has reached its full capacity.

org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: [ Datanode None is not chosen since required storage types are unavailable for storage type DISK.
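As a quick check (a sketch; exact output varies by cluster), you can review overall capacity and per-DataNode usage to see whether any node is near full:

```shell
# Summary of configured capacity, DFS used, and remaining space,
# plus a per-DataNode breakdown (look for nodes near 100% DFS Used%):
hdfs dfsadmin -report

# Human-readable free/used space for the filesystem root:
hdfs dfs -df -h /
```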
12-12-2024 10:13 AM · 1 Kudo
@divyank The HDFS NameNode may stay in safemode because it is waiting for the DataNodes to send their block reports; until those are complete, it remains in safemode. Ensure all the DataNodes started properly, have no errors, and are connected to the NameNode. Review the NameNode logs to see what it is waiting for before it will exit safemode. Manually exiting safemode may cause data loss for unreported blocks, so if you are in doubt, don't hesitate to contact Cloudera Support.
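To see where the NameNode stands, you can query safemode directly (a sketch; run as the HDFS superuser on a live cluster):

```shell
# Report whether the NameNode is currently in safemode:
hdfs dfsadmin -safemode get

# Only after confirming all DataNodes are up and have reported their
# blocks -- forcing this earlier risks data loss for unreported blocks:
hdfs dfsadmin -safemode leave
```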
12-12-2024 10:08 AM · 1 Kudo
@irshan When you add the Balancer as a role in the HDFS cluster, it will indeed show as not started; that is expected. Coming to your main query: it is possible that when you run the balancer, DataNode utilization is within the default threshold of 10%, so it won't move any blocks. You may have to reduce the balancer threshold and try again.
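For example (a sketch; the threshold value is illustrative), running the balancer with a tighter threshold forces it to act on smaller imbalances:

```shell
# With -threshold 5, any DataNode whose utilization deviates from the
# cluster average by more than 5% (instead of the default 10%) becomes
# a candidate for block movement:
hdfs balancer -threshold 5
```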
12-12-2024 10:02 AM · 1 Kudo
@Remme Though the procedure you followed might have helped you, on a larger cluster with TBs of data it is not a viable option. In that case, I would advise working with Cloudera Support.
12-12-2024 09:48 AM · 1 Kudo
@cc_yang It is possible you have enabled an HDFS space quota on the directory and the directory has reached its hard limit, causing the file upload to fail with an insufficient-space message. You can read more about HDFS quotas here: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html
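You can confirm whether a quota is set and how much is consumed (a sketch; /user/example is a placeholder path, substitute your own directory):

```shell
# Show the name quota and space quota on a directory and how much of
# each has been consumed ("none"/"inf" means no quota is set):
hdfs dfs -count -q -h /user/example

# If the quota is no longer needed, an HDFS administrator can clear it:
hdfs dfsadmin -clrSpaceQuota /user/example
```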
12-12-2024 09:40 AM
Though one can intervene manually to fix under-replicated blocks, HDFS has matured a lot, and the NameNode will take care of fixing under-replicated blocks on its own. The drawback of the manual approach is that it may add additional load to NameNode operations and may degrade the performance of existing jobs. So if you plan to do it manually, do it outside business hours or over the weekend.
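If you do go the manual route, the usual pattern looks like this (a sketch; /path/to/file is a placeholder, and 3 is assumed to be your target replication factor):

```shell
# List files that currently have under-replicated blocks:
hdfs fsck / | grep -i 'under replicated'

# Re-setting the replication factor on an affected path triggers
# re-replication; -w waits until the target replication is reached:
hdfs dfs -setrep -w 3 /path/to/file
```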
12-12-2024 09:28 AM
@darshanhira There are not many changes to the NFS Gateway in CDP 7.1.8; the issue you are facing might be due to an underlying Linux problem. Please check whether a stale nfs process is blocking the NFS Gateway startup. Also check whether any other process is holding port 2049; if so, that can also cause the NFS Gateway startup to fail. Please refer to our documentation as well: https://docs.cloudera.com/cdp-private-cloud-base/7.3.1/scaling-namespaces/topics/hdfs-using-the-nfs-gateway-for-accessing-hdfs.html
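A quick way to check both conditions on the gateway host (a sketch; service names vary by Linux distribution):

```shell
# See whether anything is already listening on port 2049 (the NFS port
# the HDFS NFS Gateway needs) and which process owns it:
ss -tlnp | grep 2049

# If the kernel NFS server is holding the port, stop it before
# starting the HDFS NFS Gateway (name may be nfs-kernel-server on
# Debian/Ubuntu hosts):
sudo systemctl stop nfs-server
```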