Member since: 03-22-2017
Posts: 63
Kudos Received: 18
Solutions: 12
03-04-2021
10:38 AM
Hi @dv_conan, A similar issue is addressed here: https://community.cloudera.com/t5/Support-Questions/failed-to-execute-command-install-yarn-mapreduce-framework/td-p/301804. Please refer to it, make the necessary changes to the directory permissions, and let us know if that helped.
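A rough sketch of the kind of permission check/fix that thread describes; the /user/yarn/mapreduce path and the yarn:hadoop ownership below are assumptions, so match them to whatever directory the error in your CM command output actually points at:

# Check the HDFS directory the CM command complains about
hdfs dfs -ls /user/yarn/mapreduce

# If ownership/permissions look wrong, fix them (assumed values shown)
hdfs dfs -chown -R yarn:hadoop /user/yarn
hdfs dfs -chmod 755 /user/yarn/mapreduce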
03-04-2021
10:30 AM
Hello @samglo, Please note that Solr CDCR is not supported in CDP yet. Refer to the Cloudera blog on Solr CDCR (Cross Data Center Replication) support: https://blog.cloudera.com/backup-and-disaster-recovery-for-cloudera-search/

From the blog, under "Solr CDCR": "The future holds the promise of a Solr to Solr replication feature as well, a.k.a. CDCR. This is still maturing upstream and will need some time to further progress before it can be considered for mission critical production environments. Once it matures we will evaluate its value in addition to all our existing options of recovery for Search. The above solutions, presented in this blog, are production-proven and provide very good coverage along with flexibility for today's workloads."

However, for information about the setup you can refer to the Apache Solr document on CDCR: https://solr.apache.org/guide/6_6/cross-data-center-replication-cdcr.html or the Cloudera Community article: https://community.cloudera.com/t5/Community-Articles/How-to-setup-cross-data-center-replication-in-SolrCloud-6/ta-p/247945
03-04-2021
10:06 AM
Hello @nj20200 It seems an existing version of the openssl libraries (openssl-libs-1.0.2k-19.el7.x86_64) is already installed, and it conflicts with the openssl-devel package being installed (openssl-devel-1.0.1e-60.el7.x86_64), causing the installation to fail. So instead of installing that specific package, update the openssl-devel package by running "yum update openssl-devel" (forcing the update if needed), or simply remove the conflicting package and install the new version of the openssl-devel package.
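A minimal shell sketch of the two options above, assuming yum is managing the packages and that the package names reported in your error match these:

# See which openssl packages are currently installed
rpm -qa | grep openssl

# Option 1: bring openssl-devel up to the repository version
yum update openssl-devel

# Option 2: remove the conflicting package, then reinstall openssl-devel
yum remove openssl-devel
yum install openssl-devel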
03-04-2021
07:25 AM
Hello @uxadmin please note that the block count threshold configuration is intended for DataNodes only. It is a DataNode health test that checks whether the DataNode has too many blocks, because having too many blocks on a DataNode may affect its performance. There is no hard limit on the number of blocks writable to a DataNode, since a block is merely a logical concept, not a physical layout. However, the block count alert serves as an early warning of a growing small-files problem. While a DataNode can handle a large number of blocks in general, going too high will cause performance issues: your processing speeds may drop if you keep a lot of tiny files on HDFS (depending on your use case, of course), so it would be worth looking into.

You can find the block count threshold in the HDFS configuration by navigating to CM > HDFS > Configuration > DataNode Block Count Thresholds. When the block count on a DataNode goes above the threshold, CM triggers an alert, so you need to adjust the threshold value based on the block counts on each DataNode. You can determine the block counts on each DataNode by navigating to CM > HDFS > WebUI > Active NN > DataNodes tab > Block counts column under the Datanode section. Hope this helps.
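If you prefer the command line, here is a quick sketch for checking per-DataNode block counts; the "Num of Blocks" field appears in recent Hadoop 3 releases, so adjust for your version:

# List per-DataNode statistics reported by the NameNode; on recent
# Hadoop 3 releases the output includes a "Num of Blocks" line per node
hdfs dfsadmin -report | grep -E "^Name:|Num of Blocks"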
03-02-2021
09:09 AM
1 Kudo
Hello @kolli_sandeep , it seems the Failover Controllers are down in the cluster. Please follow the steps here [1] and start the Failover Controller roles, which will transition the NameNodes to the Active/Standby state. You need to follow the steps below:

1. Stop the Failover Controller roles under the HDFS > Instances page.
2. Remove the HA state from ZooKeeper. On a ZooKeeper server host, run zookeeper-client and execute the following to remove the configured nameservice (a quick sketch of this step is shown after the list). This example assumes the name of the nameservice is nameservice1; you can identify the nameservice from the Federation and High Availability section on the HDFS Instances tab: rmr /hadoop-ha/nameservice1 (If you don't see a /hadoop-ha znode in the ZK znode list, skip this step.)
3. After removing the HA znode in ZK, go to CM and click HDFS > Instances > Federation and High Availability > Actions. Under the Actions menu, select Actions > Initialize High Availability State in ZooKeeper.
4. Then start the Failover Controller roles (CM > Instances > Select FailoverControllers > Actions for Selected > Start).
5. Verify the NameNode state. If you don't see the Active/Standby state of the NameNodes, or if there is any failure, just restart the HDFS service.

[1] https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_hag_hdfs_ha_enabling.html
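A minimal sketch of step 2, assuming the nameservice is called nameservice1 (replace it with the name shown in CM):

# On a ZooKeeper server host, open the ZooKeeper CLI
zookeeper-client

# Inside the client, check whether the HA znode exists
ls /hadoop-ha

# Remove the znode for your nameservice (nameservice1 is an example;
# newer ZooKeeper CLIs use "deleteall" instead of "rmr")
rmr /hadoop-ha/nameservice1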
03-02-2021
12:06 AM
Hello @raghurok , Could you please check now and see if you are still getting the timeout error? I believe the timeout was due to a network glitch or maintenance activity. I hope you are able to access it now.
03-01-2021
09:06 AM
1 Kudo
Hello @muslihuddin , Please note that while enabling HA, CM puts all three JournalNodes into a single group called "Default Group" by default, assuming you are going to use the same configuration value for all three JournalNode directories. Since you are using /app/jn for one node and /data/jn for the other two JournalNodes, it created two separate JournalNode config groups. To prevent the CM alert, you can set /data/jn in the JournalNode Default Group config so that two JournalNodes are part of the Default config group rather than a separate one; the third JournalNode will continue to operate in a separate config group until you use /data/jn as its edits directory. In case you need to change the JournalNode directory on any JournalNode, refer to the steps here: https://docs.cloudera.com/documentation/enterprise/latest/topics/cm_mc_jn.html
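If you do decide to move the third JournalNode's edits directory to /data/jn, here is a rough sketch of the filesystem part only; the role must be stopped in CM first, the paths and hdfs:hdfs ownership are assumptions, and the linked document is the authoritative procedure:

# Run on the JournalNode host while its role is stopped in CM.
# Copy the existing edits directory to the new location, preserving
# ownership and permissions, then point the role's config at /data/jn.
mkdir -p /data/jn
cp -a /app/jn/. /data/jn/
chown -R hdfs:hdfs /data/jn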
02-01-2021
10:58 AM
1 Kudo
Hi @pauljoshiva Though it is expected to have a uniform disk configuration across the DataNodes in a cluster, you can have two different disk configurations on the DataNodes. You can have one 2 TB partition on each of 3 disks (3 x 2 TB = 6 TB on each DataNode) even though the existing nodes have a 1 TB partition on each of their 9 disks (9 x 1 TB = 9 TB on each DataNode). There will be no issue running DataNodes with such a configuration, but you may see the 6 TB DataNodes filling up faster than the 9 TB DataNodes, because the NameNode does not consider the available free space on a DataNode before writing blocks to it; the NameNode picks DataNodes randomly after evaluating their network distance from the client. Hope this helps. Thank you
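If the smaller DataNodes do start filling up noticeably faster, one option (a sketch added here, not part of the original reply) is to run the HDFS Balancer periodically to even out utilization across DataNodes:

# Rebalance so that DataNode utilization stays within 10 percentage
# points of the cluster average (tune the threshold to your needs)
hdfs balancer -threshold 10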
02-01-2021
10:35 AM
Hello @vvk Please note that while adding/removing JournalNodes from a running cluster, you need to ensure a quorum of JournalNodes remains available for the NameNodes. (As cited in the shared document: NameNode high availability requires that you maintain at least three active JournalNodes in your cluster.) This means the NameNode requires at least a quorum of JournalNodes (2 of 3) to be available for edits log writes at any given point. Failing to write edits to a quorum of JournalNodes, the NameNode is expected to crash (shut itself down). I believe this could be the scenario in your case. So you need to add the new JournalNodes to the cluster first, before removing the old JournalNodes one by one, keeping a quorum of JournalNodes available in the cluster at all times. If you see the NameNode crash even though the edits log write was successful on a quorum of JournalNodes, then we need to check the NameNode log for any other issues. Thank you
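A quick sketch for confirming which JournalNodes the NameNode is currently configured to write edits to; the property name is standard HDFS, and the hostnames in the sample output are placeholders:

# Show the quorum journal URI the NameNode writes edits to; it lists
# every JournalNode host that must be reachable for a quorum
hdfs getconf -confKey dfs.namenode.shared.edits.dir
# Example output (placeholder hosts):
# qjournal://jn1:8485;jn2:8485;jn3:8485/nameservice1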
11-11-2020
02:06 AM
Hello @Amn_468 Since you reported the DataNode pause time, I was referring to the DataNode heap only. The block counts on most of the DataNodes seem to be above 6 million, so I would suggest increasing the DataNode heap to 8 GB (from the current value of 6 GB) and performing a rolling restart to bring the new heap size into effect.

There is no straightforward way to say you have hit the small-files problem, but if your average block size is a few MB, or less than a MB, it is an indication that you are storing/accumulating small files in HDFS. The simplest way to determine whether there are small files in the cluster is to run fsck, which reports the average block size. If it is too low a value (e.g. ~1 MB), you might be hitting the small-files problem, which would be worth looking at; otherwise, there is no need to review the number of blocks.

$ hdfs fsck /
...
Total blocks (validated): 2899 (avg. block size 11475601 B) <<<<<

You may refer to the links below for help with dealing with small files:
- https://blog.cloudera.com/small-files-big-foils-addressing-the-associated-metadata-and-application-challenges/
- https://community.cloudera.com/t5/Community-Articles/Identify-where-most-of-the-small-file-are-located-in-a-large/ta-p/247253
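As a rough follow-up sketch (not taken from the linked articles), you can also use hdfs dfs -count to spot directories that hold a disproportionate number of files; the /user/* path below is just an example:

# Output columns: DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
# Sorting by FILE_COUNT (column 2) surfaces directories with many files
hdfs dfs -count /user/* | sort -k2 -n -r | head -20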