Member since: 05-14-2020
Posts: 109
Kudos Received: 11
Solutions: 10
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 857 | 01-31-2023 06:37 AM |
 | 701 | 01-31-2023 06:15 AM |
 | 996 | 01-31-2023 06:03 AM |
 | 621 | 12-22-2022 05:33 PM |
 | 1010 | 08-26-2022 12:23 AM |
04-12-2023
01:48 AM
@rajilion It seems that you are using the -update flag with the distcp command, which causes distcp to skip files that already exist in the destination and look unchanged. When copying to S3, HDFS checksums cannot be compared with the object store, so the decision effectively comes down to file size (and modification time). This is the expected behavior of distcp when the -update flag is used. In your case, even though the content of the file has changed, its size and modification time are still the same, so distcp skips the file during the copy.

To copy the updated file to S3, you can replace the -update flag with -overwrite. This forces distcp to copy every file from the source directory to the destination, regardless of whether it already exists there (note that -delete is only valid together with -update or -overwrite). Your updated command would look like this:

hadoop distcp -pu -overwrite -delete hdfs_path s3a://bucket

The -pu flag preserves the user ownership of the files during the copy (use -pug if you also want to preserve the group).

Please note that copying without -update makes distcp transfer all files from the source directory to the destination, even those that haven't been modified. This can be time-consuming and may result in unnecessary data transfer costs if you have a large number of files to copy.

If you only want to copy specific files that have been modified, you can use a different tool such as s3-dist-cp or aws s3 sync, which perform their own change detection rather than relying on modification times or file sizes alone.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.
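If re-copying the whole directory is too expensive, a narrower option is to re-copy only the paths you know have changed. A minimal sketch under that assumption; only hdfs_path and s3a://bucket come from this thread, the file name is a placeholder:

```bash
# Targeted re-copy: overwrite only the file(s) known to have changed,
# instead of re-transferring the whole directory tree.
# "changed_file.csv" is illustrative; adjust paths to your environment.
hadoop distcp -pu -overwrite \
  hdfs_path/changed_file.csv \
  s3a://bucket/changed_file.csv

# Quick sanity check that the object landed with the expected size.
hadoop fs -ls s3a://bucket/changed_file.csv
```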
02-02-2023
03:08 AM
Hi @arunek95 I went through the docs but wasn't able to find the minimum RAM for the nodes. If I were to create 3 VMs for the nodes, will 4 GB each be enough? Also, will 8 GB be barely sufficient for the main Cloudera Manager node? Asking this because I will be using VMware Workstation 15.5 on Windows 10 Pro and need to get the memory right for the cluster machines. Thanking you. Warm Regards, Plascio
01-31-2023
05:21 PM
Hi @arunek95, yes. But you gave me the idea: after I deleted the data in the alert_current and alert_history tables, the problem was solved. I don't know why; do you? And thank you very much.
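For anyone who lands here later, clearing those tables means working directly against the management server's backend database. A rough sketch, assuming a PostgreSQL backend with hypothetical connection details (only the table names alert_current and alert_history come from this post); stop the management server and back up the database before attempting anything like this:

```bash
# Hypothetical host/database/user; adjust to your environment.
# alert_current is cleared first in case it references alert_history.
psql -h db-host -U dbuser -d mgmt_db <<'SQL'
DELETE FROM alert_current;
DELETE FROM alert_history;
SQL
```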
01-31-2023
06:37 AM
2 Kudos
Hello @dhughes20 , I can see that when we navigate to a Hive table in the Atlas UI there is an audit tab. I tried to update the table by adding a new column. After altering the table, the audit shows an action for "Entity update", and when we expand the latest entry we can see details like the last modified time, last modified by, and more under "Parameter". But I am not able to find any audit log that shows what changes were actually made.
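For context, the change that produced the "Entity update" audit entry was an ALTER TABLE adding a column. A minimal sketch of that kind of change, with hypothetical database, table, and column names (nothing here comes from the original post except the idea of adding a column):

```bash
# Add a column to a Hive table; Atlas records this as an "Entity update"
# audit event via its Hive hook. All names below are illustrative.
beeline -u "jdbc:hive2://hiveserver-host:10000/default" \
  -e "ALTER TABLE demo_db.demo_table ADD COLUMNS (new_col STRING COMMENT 'added for audit test');"
```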
01-31-2023
06:15 AM
1 Kudo
Hi @sathishCloudera , we have log4j patches ready for CDH 6.1.1 in which the log4j version is 2.17.1. Please reach out to Cloudera Support with a technical case to have the patch delivered.
01-24-2023
03:15 AM
Dear @drewski7 please use this API call to get the result:

curl -X POST "https://dev-cm.sever.com:7183/api/v43/clusters/cluster-name/services/service/commands/restart" -H "accept: application/json" -k -u admin:password

Replace the API version (v43), the cluster name, and the service name with your own values. If you don't have TLS enabled, change port 7183 to 7180 and remove the -k option.
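If you are unsure which API version your Cloudera Manager supports, you can ask it first and then poll the command returned by the restart call. A rough sketch reusing the same host and credentials as above; the /api/version and /api/vNN/commands/{id} endpoints are part of the CM API, but the command id shown is purely illustrative:

```bash
# Highest API version this Cloudera Manager supports (e.g. "v43").
curl -s -k -u admin:password "https://dev-cm.sever.com:7183/api/version"

# The restart call returns a JSON command object with an "id" field;
# poll that id to see whether the restart is still active or has finished.
curl -s -k -u admin:password \
  "https://dev-cm.sever.com:7183/api/v43/commands/12345"
```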
12-25-2022
11:33 AM
I solved the problem; the reason was not in the filter.
12-22-2022
06:00 PM
Hi @drgenious , we have a doc that explains the steps to migrate Oozie workflows from CDH to CDP; please have a look at it: https://docs.cloudera.com/cdp-one/saas/cdp-one-data-migration/topics/cdp-saas-oozie-migration-workflows-in-cdh.html If you still have more queries, please reach out to our support through the Cloudera portal.
12-22-2022
05:33 PM
Hi @KPG1 , from the description I believe you are looking for a way to configure high availability for HMS. If that is the case, the doc below can help: https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdh/topics/hive-hms-ha-configuration.html As for configuring the backend database for HA, that is outside the scope of Cloudera; you will have to work with your DB team to understand how it can be configured.
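At a high level, HMS HA means running more than one metastore instance and letting clients fail over between the thrift URIs listed in hive.metastore.uris. A rough illustration with hypothetical hostnames (the actual configuration should be done through Cloudera Manager as described in the linked doc):

```bash
# Illustrative check that both metastore instances are listening.
# Hostnames and the default metastore port 9083 are assumptions, not from the post.
for host in hms-host-1.example.com hms-host-2.example.com; do
  nc -z -w 5 "$host" 9083 && echo "$host:9083 reachable"
done

# Clients fail over between the comma-separated URIs in hive.metastore.uris, e.g.:
#   thrift://hms-host-1.example.com:9083,thrift://hms-host-2.example.com:9083
```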