Member since: 07-30-2020
Posts: 219
Kudos Received: 46
Solutions: 60
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4868 | 11-20-2024 11:11 PM |
| | 2920 | 09-26-2024 05:30 AM |
| | 2447 | 10-26-2023 08:08 AM |
| | 4189 | 09-13-2023 06:56 AM |
| | 4491 | 08-25-2023 06:04 AM |
10-10-2022
09:27 PM
Hello @cprakash, since we haven't heard from your team, we are marking this post as resolved. Feel free to add your team's observations whenever feasible. In summary: review the HMaster logs to confirm the reason for the ConnectionRefused. A few possible scenarios: port 16000 is already in use by another service, "master1" isn't correctly mapped in DNS, or port 16000 is blocked. Regards, Smarak
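The three scenarios above can be checked programmatically. This is only an illustrative sketch: the hostname "master1" and port 16000 come from the thread, while the function name and return strings are hypothetical.

```python
import socket

def diagnose_hmaster_port(host="master1", port=16000):
    """Roughly mirror the three scenarios from the post: DNS mapping,
    port already in use / reachable, and refused or blocked."""
    try:
        addr = socket.gethostbyname(host)   # is the host mapped in DNS?
    except socket.gaierror:
        return "dns-unresolved"
    with socket.socket() as s:
        s.settimeout(5)
        try:
            s.connect((addr, port))         # something is listening on the port
            return "open"
        except OSError:                     # refused, blocked, or timed out
            return "refused-or-blocked"
```

If this returns "open" before HMaster is started, another service already holds port 16000; "dns-unresolved" points at the DNS mapping; "refused-or-blocked" suggests a firewall or a service that is down.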
09-25-2022
10:01 PM
@abdebja, have any of the replies helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
09-19-2022
02:10 AM
Hi @Anlarin, it is always recommended to use homogeneous disk storage across DataNodes. If a DataNode has heterogeneous volumes and block replicas are written to its disks in a round-robin fashion, the lower-capacity disks fill up faster than the larger ones. If the client is local to Node 2, the first replica of each block is placed on that node, so it is expected to fill faster. With the "Available Space" policy, the DataNode takes the free space on each volume into account when deciding where to place a new replica. To distribute writes evenly as a percentage of capacity across drives, change the volume choosing policy (dfs.datanode.fsdataset.volume.choosing.policy) to Available Space. If using Cloudera Manager: 1. Navigate to HDFS > Configuration > DataNode. 2. Change DataNode Volume Choosing Policy from Round Robin to Available Space. 3. Click Save Changes. 4. Restart the DataNodes. Note that this property only balances volumes within a single DataNode. https://docs.cloudera.com/documentation/enterprise/latest/topics/admin_dn_storage_balancing.html - Was your question answered? Please take some time to click on “Accept as Solution” below this post. If you find a reply useful, say thanks by clicking on the thumbs up button.
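To see why round robin fills smaller disks first while an available-space policy does not, here is a simplified sketch. Note the real AvailableSpaceVolumeChoosingPolicy is probabilistic and threshold-based; the greedy "most free space" version below is only an illustration, and the volume sizes are made up.

```python
import itertools

def round_robin(volumes, n_blocks, block_size=1):
    """Place blocks cycling through volumes; smaller disks hit 0 free first.
    Assumes total capacity is sufficient for n_blocks."""
    free = list(volumes)
    cycle = itertools.cycle(range(len(free)))
    for _ in range(n_blocks):
        i = next(cycle)
        while free[i] < block_size:   # skip volumes that are already full
            i = next(cycle)
        free[i] -= block_size
    return free

def available_space(volumes, n_blocks, block_size=1):
    """Greedy simplification: always place on the volume with most free space."""
    free = list(volumes)
    for _ in range(n_blocks):
        i = max(range(len(free)), key=lambda j: free[j])
        free[i] -= block_size
    return free
```

With volumes of capacity 10 and 30 and 20 blocks, round robin leaves free space [0, 20] (the small disk is completely full), while the available-space variant leaves [10, 10] (the small disk is untouched until the free space evens out).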
09-15-2022
12:57 AM
Hi @rki_, thanks for the explanation. I had hoped to find the reason for Solr's "index writer closed" error. Thank you anyway.
08-25-2022
08:44 AM
Hi @KCJeffro, the best approach would be to change the log level for the logger 'org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy' to ERROR. Do the following: navigate to Cloudera Manager > HDFS > Configuration, search for 'NameNode Logging Advanced Configuration Snippet (Safety Valve)' (log4j_safety_valve), and add the following property: log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=ERROR
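As a rough analogy, Python's stdlib logging uses the same hierarchical-logger idea as log4j: raising one logger's level to ERROR silences its INFO/WARN chatter without affecting any other logger. (This is only an illustration of the mechanism, not Hadoop code.)

```python
import logging

# Silence a single noisy logger, as the log4j property above does
# for BlockPlacementPolicy on the NameNode.
noisy = logging.getLogger(
    "org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy")
noisy.setLevel(logging.ERROR)

print(noisy.isEnabledFor(logging.WARNING))  # False: WARN chatter is suppressed
print(noisy.isEnabledFor(logging.ERROR))    # True: real errors still get logged
```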
08-22-2022
11:41 AM
@BORDIN Has the reply above helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks!
08-16-2022
10:00 AM
@Ben1978 You can refer to the docs below for Spark 3, 3.1, and 3.2 respectively: https://docs.cloudera.com/cdp-private-cloud-base/7.1.4/cds-3/topics/spark-spark-3-requirements.html https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/cds-3/topics/spark-spark-3-requirements.html https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/cds-3/topics/spark-3-requirements.html As CDP 7.1.8 is not released yet, we don't have an official doc for it. -- Was your question answered? Please take some time to click on “Accept as Solution” below this post. If you find a reply useful, say thanks by clicking on the thumbs up button.
08-16-2022
06:34 AM
Hi @noekmc, are you referring to the highlighted "Restart service" option that you see under ldap_url? If yes, this is expected, and you can refer to my earlier comment.
08-10-2022
07:15 AM
Hi @yagoaparecidoti, yes, since you are using Ambari to manage the cluster, you can add the property as follows: Ambari -> HDFS -> Configs -> Advanced -> Custom hdfs-site -> Add Property: dfs.client.block.write.replace-datanode-on-failure.policy=ALWAYS