Member since
07-30-2020
219
Posts
45
Kudos Received
60
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 435 | 11-20-2024 11:11 PM
 | 489 | 09-26-2024 05:30 AM
 | 1084 | 10-26-2023 08:08 AM
 | 1852 | 09-13-2023 06:56 AM
 | 2129 | 08-25-2023 06:04 AM
09-15-2022
12:57 AM
Hi @rki_ , Thanks for the explanation. I had hoped to find a reason for the Solr index-writer-closed error. Thank you anyway.
08-28-2022
10:07 PM
@fsm17, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
08-25-2022
08:44 AM
Hi @KCJeffro , The best way is to change the log level for the logger 'org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy' to ERROR. Do the following: 1. Navigate to Cloudera Manager > HDFS > Configuration > search for 'NameNode Logging Advanced Configuration Snippet (Safety Valve) log4j_safety_valve'. 2. Add the following property: log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=ERROR
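For a temporary change that does not require a NameNode restart, the same logger can also be adjusted at runtime with Hadoop's `daemonlog` command. This is only a sketch: the NameNode hostname and HTTP port below are placeholders, and the level reverts on restart (use the safety valve entry above to persist it).

```shell
# Set the BlockPlacementPolicy logger to ERROR at runtime.
# namenode-host.example.com:9870 is a placeholder for your NameNode's HTTP address;
# this change is lost when the NameNode restarts.
hadoop daemonlog -setlevel namenode-host.example.com:9870 \
  org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy ERROR
```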
08-22-2022
11:41 AM
@BORDIN Has the reply above helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks!
08-20-2022
02:23 PM
First, thank you so much for your help. I see in the post the following example: [{"ConfigGroup":{"id":2,"cluster_name":"c1","group_name":"A config group","tag":"HDFS","description":"A config group","hosts":[{"host_name":"host1"}],"service_config_version_note":"change","desired_configs":[{"type":"hdfs-site","tag":"version1443587493807","properties":{"dfs.replication":"2","dfs.datanode.du.reserved":"1073741822"}}]}}] I would appreciate a full example of how to run this API, using curl or the full Ambari API. A note about 'version1443587493807': is this version number a "random" number that I need to set?
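A hedged sketch of how a payload like the one quoted above is typically sent to the Ambari REST API with curl. The host, port, credentials, cluster name `c1`, and config group id `2` are assumptions taken from the example, not verified against any cluster. The tag does not appear to be random so much as unique: it must differ from earlier tags for the same config type, and the common convention is "version" followed by the current epoch time in milliseconds.

```shell
# Sketch only: update config group 2 on cluster c1 via the Ambari REST API.
# Host, port, and admin:admin credentials are placeholders.
# "version$(date +%s%3N)" builds a unique "version<epoch-millis>" tag (GNU date).
TAG="version$(date +%s%3N)"
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  "http://ambari-host.example.com:8080/api/v1/clusters/c1/config_groups/2" \
  -d "[{\"ConfigGroup\":{\"id\":2,\"cluster_name\":\"c1\",\"group_name\":\"A config group\",\"tag\":\"HDFS\",\"description\":\"A config group\",\"hosts\":[{\"host_name\":\"host1\"}],\"service_config_version_note\":\"change\",\"desired_configs\":[{\"type\":\"hdfs-site\",\"tag\":\"$TAG\",\"properties\":{\"dfs.replication\":\"2\",\"dfs.datanode.du.reserved\":\"1073741822\"}}]}}]"
```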
08-16-2022
10:00 AM
@Ben1978 You can refer to the docs below for Spark 3.0, 3.1, and 3.2 respectively: https://docs.cloudera.com/cdp-private-cloud-base/7.1.4/cds-3/topics/spark-spark-3-requirements.html https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/cds-3/topics/spark-spark-3-requirements.html https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/cds-3/topics/spark-3-requirements.html As CDP 7.1.8 has not been released yet, we don't have an official doc for it. -- Was your question answered? Please take some time to click on “Accept as Solution” below this post. If you find a reply useful, say thanks by clicking on the thumbs up button.
08-16-2022
06:34 AM
Hi @noekmc , Are you referring to the "Restart service" highlighted option that you see under ldap_url? If yes, it is expected, and you can refer to my earlier comment.
08-10-2022
07:15 AM
Hi @yagoaparecidoti , Yes, since you are using Ambari to manage the cluster, you can add the property as follows: Ambari -> HDFS -> Configs -> Advanced -> Custom hdfs-site -> Add Property: dfs.client.block.write.replace-datanode-on-failure.policy=ALWAYS
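For reference, here is how that setting would look as a raw hdfs-site.xml property, which is what Ambari renders from the Custom hdfs-site entry. Shown only as a sketch of the generated config:

```xml
<!-- Always try to replace a failed DataNode in the client write pipeline -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>ALWAYS</value>
</property>
```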
08-03-2022
12:02 AM
Hi @Krisssh 1. '/hbase' is the HBase root directory (hbase.rootdir). 2. The steps should work for HDP3 to CDP; I haven't checked HDP2 to CDP. There are a few known issues when migrating Phoenix tables from HDP2 to CDP, so that migration should follow the path HDP2 => HDP3 => CDP. For HBase tables, as per this, it has worked. -- Was your question answered? Please take some time to click on "Accept as Solution" below this post. If you find a reply useful, say thanks by clicking on the thumbs up button.
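As a quick sanity check before or after a migration, the HBase root directory mentioned in point 1 can be inspected directly on HDFS. A minimal sketch, assuming the default `hbase.rootdir` of `/hbase` (adjust the path if your cluster overrides it):

```shell
# List the HBase root directory (default hbase.rootdir)
hdfs dfs -ls /hbase
# User tables in the default namespace live under /hbase/data/default
hdfs dfs -ls /hbase/data/default
```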