Member since: 06-13-2021
Posts: 245
Kudos Received: 7
Solutions: 10
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 170 | 09-21-2022 09:46 AM |
| | 183 | 07-18-2022 10:50 AM |
| | 586 | 07-18-2022 10:40 AM |
| | 318 | 06-15-2022 02:10 AM |
| | 366 | 03-24-2022 04:54 AM |
09-21-2022
09:46 AM
Hi @marcocharlie I can see that you or your team member created the support case, so we will mark this as solved. The issue was resolved by Cloudera Support.
08-25-2022
01:44 AM
@hebamahmoud If your issue has been resolved, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
07-21-2022
08:07 AM
Hi @marcocharlie Greetings, Yes, it is possible to increase the size, but please raise a support request because we need to share some internal documents with you after reviewing the cluster. Please let us know once you have created the support case. Thanks, Shehbaz.
07-18-2022
10:50 AM
Hi @federicoferruti Thanks for reaching out to us with this query. Yes, please proceed with the first link. Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
07-18-2022
10:40 AM
This issue has been resolved via the support case; we applied the solution below (example commands follow the list):
1- Check whether "/tmp/hadoop-solron" is present in the /tmp directory on the Data Lake master node.
2- If it is not, create "/tmp/hadoop-solron".
3- It should be owned by user/group solr:solr and have 755 permissions.
4- Also, ensure there is enough available space in /tmp on the Data Lake master.
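For reference, a minimal sketch of the commands for steps 1-4 (assuming root or sudo access on the Data Lake master node):
# df -h /tmp
# mkdir -p /tmp/hadoop-solron
# chown solr:solr /tmp/hadoop-solron
# chmod 755 /tmp/hadoop-solron
# ls -ld /tmp/hadoop-solron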
06-24-2022
03:56 AM
@pandav Thanks for the update. I would request you to please file a support case; we need to check multiple aspects of the backup.
06-23-2022
06:53 PM
Hi @snm1523 Please open a support case with us to resolve this issue. We may need help from the account team. Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-23-2022
06:48 PM
@Christ Thanks for your feedback on the documentation; I will highlight this to our documentation team. Regarding your query, we need to open a support ticket because it requires the involvement of our product team. Please let us know if you have any other queries. Thanks,
06-21-2022
08:32 AM
No, the cdpcli does not store any file in a temp location; it is the Data Lake backup that stores the temp file on the master node. You can try the following command:
cdp datalake backup-datalake --datalake-name dl-bakup --backup-name test-backup --skip-ranger-hms-metadata --skip-atlas-metadata --skip-ranger-audits --backup-location s3a://bucket-name/backup-archive
Please refer to the official doc below for Data Lake backup: Configuring and running Data Lake backups Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
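Once the backup is triggered, a quick way to check on it is to list the backups for the same Data Lake, for example (a sketch reusing the Data Lake name from the command above):
# cdp datalake list-datalake-backups --datalake-name dl-bakup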
06-21-2022
08:07 AM
Hi @Christ If your issue has been resolved, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-21-2022
07:59 AM
@pandav Welcome to the Cloudera Community. I understand you are facing issues while taking the Data Lake backup. Could you please provide the output of the following commands:
# cdp datalake list-datalake-backups --datalake-name dl-name
# df -kha
Also, what is your CDP Runtime version? We have seen in the past that whenever we try to take a Data Lake backup, it backs up the Ranger/HMS metadata and Ranger audits; the temp file is written to the master node before being moved to S3, and if the metadata is too big it fills up the root filesystem on the master node. You can also contact Cloudera Support about this. Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
06-15-2022
03:49 AM
Hello @grlzz, We already have a feature improvement tracked for this; please find its id below: 1- DSE-13091 You can contact the account team if you would like to get updates on the JIRA. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
06-15-2022
03:48 AM
Hi @pky If your issue has been resolved, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks
06-15-2022
03:42 AM
If your issue has been resolved, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-15-2022
02:21 AM
Hi @Christ, Thanks for being a part of the Cloudera Community. You can leverage the following options provided by Cloudera for the Public and Private Clouds: CDP Public Cloud Data Warehouse and CDP Private Cloud Data Warehouse. Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-15-2022
02:10 AM
Hi @hebamahmoud, Could you please refer to the article below for your query? Go through it and let us know if it works for you: Auto-TLS in Cloudera Data Platform Data Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
05-11-2022
01:28 AM
Hello Lakshmi
- Does the role you are mapping to the user in the IDBroker Mappings section have the right S3 bucket specified? Or are you using the same bucket created during the Data Lake deployment?
- Can you also make sure that Spark is configured to point to the S3 bucket [1]? For Spark, it is required to define the S3 bucket name in the property "spark.yarn.access.hadoopFileSystems".
Example: if using a Data Hub cluster, access the DH in the Management Console > CM-UI > Clusters > Spark > Configurations, then create a file named "spark-defaults.conf" (or update the existing file) with the property:
spark.yarn.access.hadoopFileSystems=s3a://bucket_name
To check the mappings: DL > Manage Access > IDBroker Mappings > Edit > confirm the Data Access Role was given. DH > Manage Access > assign yourself the required roles.
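For example, a minimal sketch of setting the same property per job at submit time instead of via spark-defaults.conf (the bucket, class, and jar names below are placeholders):
spark-submit \
  --conf spark.yarn.access.hadoopFileSystems=s3a://bucket_name \
  --class org.example.MyApp \
  myapp.jar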
05-11-2022
01:21 AM
To get the number of vcores and memory used for a particular queue in the cluster, you can use the Cluster Utilization Report; please go through the doc below for more information:
https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/admin_cluster_util_report.html
Also, if you want to see the resources used for each application, please go through the KB article below:
https://my.cloudera.com/knowledge/How-to-calculate-memory-and-v-core-utilization-for-each-of-the?id=...
Another option is the yarn application -status command, which gives the Aggregate Resource Allocation for an application. For example, running yarn application -status application_ID (for one of the completed applications) returns a row such as: Aggregate Resource Allocation : 46641 MB-seconds, 37 vcore-seconds This gives the aggregate memory and CPU allocation in seconds.
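For reference, a minimal sketch of that check (the application ID below is a placeholder; substitute one of your completed applications):
# yarn application -status application_1234567890_0001
The output includes a row such as:
Aggregate Resource Allocation : 46641 MB-seconds, 37 vcore-seconds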
03-24-2022
04:59 AM
Hello @shadma-1 Just wanted to check if you have any further queries related to the replication manager. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button 🙂
03-24-2022
04:54 AM
Hello @grlzz, We already have a feature improvement tracked for this; please find its id below: 1- DSE-13091 You can contact the account team if you would like to get updates on the JIRA. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
03-24-2022
01:33 AM
Hello @Juanes, Could you please check the ksck report from Kudu? If you have any unhealthy tables, please also verify the replicas. Please refer to doc[1]. doc[1]: https://kudu.apache.org/docs/administration.html#tablet_majority_down_recovery An example command is shown below.
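A minimal sketch of generating the ksck report with the Kudu CLI (the master addresses are placeholders for your cluster's Kudu masters):
# kudu cluster ksck master-1.example.com,master-2.example.com,master-3.example.com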
Thanks,
03-23-2022
04:45 AM
1 Kudo
Hello @dutras Thanks for opening it; we will jump on the call.
03-22-2022
01:29 AM
1 Kudo
@dutras Please refer to the KB[1] to get the CRN and please raise a support case. Support will help you with this 🙂 KB[1]: https://community.cloudera.com/t5/Community-Articles/What-is-CRN-and-how-do-I-find-it/ta-p/331851
03-17-2022
12:13 PM
Hello @shadma-1 CDP includes an experience for this called Replication Manager; with it you can migrate your HDFS/Hive/HBase data to S3 or Azure. Please refer to the official links below for your reference; if you have any difficulties, you can also reach out to the support team. 1- Cloudera Replication Manager 2- Introduction to Replication Manager Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
03-17-2022
12:05 PM
Hello @drgenious Please reach out to certification@cloudera.com. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
03-17-2022
12:03 PM
Hello @dutras The following are possible causes for your issue:
- Communication issue
- Cloud provider issue
- Transient error
You can sync the cluster with the cloud provider by running the following command (see the example below):
cdp datahub sync-cluster --cluster-name <value>
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
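For example, a sketch of the sync followed by a status re-check (the cluster name is a placeholder, and describe-cluster is mentioned here only as one way to verify the result):
# cdp datahub sync-cluster --cluster-name my-datahub-cluster
# cdp datahub describe-cluster --cluster-name my-datahub-cluster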
03-17-2022
11:54 AM
Hello @Koffi The HDFS Balancer will do the job for you; please refer to the official docs below before configuring it. 1- Overview of the HDFS Balancer 2- Configuring the Balancer Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
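For reference, a minimal sketch of running the balancer from the command line (the 10 percent threshold is only an illustrative value; review the docs above before choosing one):
# hdfs balancer -threshold 10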
03-16-2022
08:59 PM
1 Kudo
1. With the same license, can I have different clusters with different Cloudera Manager instances? (e.g., we are asking for a 15-node license but 3 clusters of 5 nodes each; there is no connectivity between these environments, so we need a separate Cloudera Manager for each environment with the same license.) Is it possible? - Yes, it is possible.
2. Our production servers have no internet connectivity; is it mandatory to have connectivity to Cloudera servers for license activation? - You can still activate the license without connectivity to Cloudera servers.
3. If I use the license in our R&D environment to test the above cases, can I use the same license for production clusters afterwards, or will it say the license is already used? - Yes, it is possible, but it should be deployed under the same account.
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
03-15-2022
06:56 AM
Hello @rahuledavalath You don't need three different licenses, but I would suggest going through the link below: https://www.cloudera.com/products/pricing.html You can find the required information there, and you can also contact our sales team; their contact information is in the link itself. Thanks,