11-19-2021
02:03 AM
Hello!

You can remove data from HDFS using the following commands:

# hdfs dfs -rm -R -skipTrash <Extra-Data-folder>
# hdfs dfs -rm -r /tmp/spark

This issue is caused by too many DataNodes running at high disk utilization, which reduces the total number of DataNodes available for write requests. As a result, the DataNodes that are still available for writes are targeted at a higher rate, increasing their transceiver activity to the point of being "overloaded".

To correct this, reduce the disk utilization of the DataNodes in the cluster that have reached their capacity limits. Adding drives to increase storage space, deleting unwanted/non-critical data from HDFS, or adding DataNodes to the cluster are all worthwhile solutions to this problem.

There is also a workaround for DataNode rejections caused by higher-than-normal transceiver volumes. Note that this is not a long-term solution and should only be used temporarily: set the 'dfs.namenode.replication.considerLoad' parameter to 'false' under HDFS > Configurations > "NameNode Advanced Configuration Snippet (Safety Valve)" in Cloudera Manager. This tells the NameNode to ignore current transceiver activity when choosing a DataNode for block placement. It can have unintended consequences if left on permanently, as the NameNode can overwhelm DataNodes with too many requests; the considerLoad parameter exists to prevent exactly that.

Hopefully the provided solution will help resolve the issue.

Regards,
Vaishnavi Nalawade
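If it helps, here is a quick way to confirm which DataNodes are close to their capacity limit before deleting anything (the exact report fields may vary slightly by CDH/CDP version):

# hdfs dfsadmin -report | grep -E 'Name:|DFS Used%'

And for the temporary workaround above, the safety valve expects an XML property snippet; a minimal sketch would be the following (verify the property name against your Hadoop version, since newer releases rename it to dfs.namenode.redundancy.considerLoad):

<property>
  <name>dfs.namenode.replication.considerLoad</name>
  <value>false</value>
</property>

Remember to restart the NameNode after saving the change, and to revert it once disk utilization is back under control.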
11-19-2021
01:40 AM
Hello Koffi,

Please delete and recreate the Solr shards. If you open the Solr Web UI, you will observe that one or more nodes are down. Log in via SSH and check the Infra Solr data directory on the node that is down; you will notice that no core directories exist there. Check the same directory on an active Infra Solr node and you will see the core directories present. This is why all cores are down on that specific node.

To fix this issue, delete the ranger_audits collection and let it be recreated:

1. Stop the Infra Solr and Ranger services.
2. Delete the collection:
   curl -k --negotiate -u : "http://$(hostname -f):8886/solr/admin/collections?action=DELETE&name=ranger_audits"
3. Start Ranger and Infra Solr again.

Hopefully this will resolve the issue.

Regards,
Vaishnavi Nalawade
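As a quick sanity check after step 3 (assuming the same Kerberized Infra Solr endpoint on port 8886 as above), you can confirm via the standard Solr Collections API that the collection was recreated and its shards came back up:

# List all collections; ranger_audits should reappear once Ranger restarts
curl -k --negotiate -u : "http://$(hostname -f):8886/solr/admin/collections?action=LIST"

# Inspect shard/replica state for the recreated collection
curl -k --negotiate -u : "http://$(hostname -f):8886/solr/admin/collections?action=CLUSTERSTATUS&collection=ranger_audits"

Each replica should report state "active"; if cores are still missing on the problem node, re-check its Infra Solr data directory before retrying.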