Member since: 03-11-2020
Posts: 186
Kudos Received: 28
Solutions: 40
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 449 | 11-07-2024 08:47 AM
 | 309 | 11-07-2024 08:36 AM
 | 424 | 06-18-2024 01:34 AM
 | 230 | 06-18-2024 01:25 AM
 | 498 | 06-18-2024 01:16 AM
07-31-2023
11:33 PM
@h2rr821 Please perform the steps below:
1. Identify a compatible driver version.
2. Back up the existing Cloudera PostgreSQL JDBC driver files.
3. Download the new PostgreSQL JDBC driver files.
4. Stop the Cloudera services that use the PostgreSQL JDBC driver.
5. Replace the old JAR files with the new ones.
6. Ensure the new JAR files have correct permissions.
7. Restart the Cloudera services.
If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.
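The backup-and-replace portion of the steps above (2, 5, and 6) can be sketched in Python. This is a minimal illustration, not the official procedure; the directory and JAR names you pass in depend on your installation, and the services must be stopped before running it and restarted afterward:

```python
import shutil
from pathlib import Path

def replace_jdbc_driver(driver_dir: str, old_jar: str, new_jar_path: str) -> Path:
    """Back up the existing JDBC driver JAR, copy the new one into place,
    and set readable permissions. Returns the path of the backup copy."""
    target = Path(driver_dir) / old_jar
    backup = target.with_name(target.name + ".bak")
    if target.exists():
        shutil.copy2(target, backup)   # step 2: back up the existing driver
    shutil.copy2(new_jar_path, target) # step 5: replace with the new JAR
    target.chmod(0o644)                # step 6: ensure correct permissions
    return backup
```

The `.bak` naming and `0o644` mode are illustrative choices; follow whatever backup and permission conventions your environment uses.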
07-12-2023
11:48 PM
@George-Megre There is a possibility you are hitting the issue below. https://issues.apache.org/jira/browse/AMBARI-20068
05-31-2023
02:36 AM
@noekmc To change the keystore password, use the following command: keytool -storepasswd -keystore /path/to/keystore.jks
04-27-2023
09:40 PM
@kobolock If you are seeing the same error as below, you can try the following solution. SSLError: Failed to connect. Please check openssl library versions. To resolve this issue, add the following property to the ambari-agent.ini file (/etc/ambari-agent/conf/ambari-agent.ini) under the [security] section and restart ambari-agent:
========
[security]
force_https_protocol=PROTOCOL_TLSv1_2
========
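If you want to apply that ini change programmatically across hosts, a minimal sketch with Python's standard configparser follows. The function name is hypothetical, and you should verify the ini path on your hosts; restart ambari-agent afterward for the change to take effect:

```python
import configparser

def set_tls_protocol(ini_path: str) -> None:
    """Add force_https_protocol=PROTOCOL_TLSv1_2 under [security]
    in the given ambari-agent.ini, preserving existing sections."""
    config = configparser.ConfigParser()
    config.read(ini_path)
    if not config.has_section("security"):
        config.add_section("security")
    config.set("security", "force_https_protocol", "PROTOCOL_TLSv1_2")
    with open(ini_path, "w") as f:
        config.write(f)
```

Note that configparser rewrites the file in its own formatting, so hand-written comments in the ini may be dropped; editing the file directly is the safer route if it carries comments you want to keep.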
04-27-2023
09:28 PM
@harry_12 Could you please confirm the following?
>> Is that user a local user (i.e., created through the Ambari UI)? If yes, that user will not be able to log in to your OS.
>> Where does this user come from? Is it AD or LDAP?
>> Where are you trying to SSH? What server is that?
>> Are you doing a curl against the Ambari URL and trying to access it with that user? If that is the case, the curl should work.
>> Please attach some screenshots of this event so we can understand it better.
04-17-2023
11:23 PM
Hello @itsmezeeshan, hope you are doing well. It is not recommended to clear space from this location, as these are the Cloudera Manager package directories for the following services: cloudera-host-monitor, cloudera-scm-agent, cloudera-scm-eventserver, cloudera-scm-headlamp, cloudera-scm-server, cloudera-scm-server-db, and cloudera-service-monitor.
[root@mprl509 lib]# du -sh cloudera-*
1.3G  cloudera-host-monitor
300K  cloudera-scm-agent
20M   cloudera-scm-eventserver
4.6M  cloudera-scm-headlamp
3.0G  cloudera-scm-server
272M  cloudera-scm-server-db
5.5G  cloudera-service-monitor
I would recommend increasing the disk space by 20 GB or more.
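If you want to reproduce that du -sh breakdown from a script (for monitoring which of these directories is growing), a small sketch follows. The helper name is hypothetical; it sums regular file sizes the way `du -s` does, skipping symlinks:

```python
import os

def dir_size_bytes(path: str) -> int:
    """Recursively sum the sizes of regular files under path,
    roughly equivalent to `du -s` (symlinks are not followed)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total
```

For example, calling it on each /var/lib/cloudera-* directory and sorting the results would show that cloudera-service-monitor and cloudera-scm-server dominate the usage, as in the du output above.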
04-12-2023
02:07 AM
@aleezeh Let us know if you are also getting the error "unable to find valid certification path to requested target". If yes, there is a chance the Kafka truststore does not have the Ranger Admin certificate; you can import it into the Kafka truststore: keytool -importcert -file /tmp/ranger.cer -keystore kafka_plugin_truststore.jks
04-12-2023
01:59 AM
@aleezeh Can you please attach the log file so we can investigate further? Which version of HDP or CDP are you using?
04-12-2023
01:48 AM
@rajilion It seems you are using the -update flag with the distcp command, which causes distcp to skip files that already exist in the destination and have a modification time equal to or newer than the source file. This is the expected behavior of distcp when -update is used. In your case, even though the content of the file has changed, the size and modification time are unchanged, so distcp skips the file during the copy.
To copy the updated file to S3, try removing the -update flag from the distcp command. This forces distcp to copy all files from the source directory to the destination, regardless of whether they already exist there. Your updated command would look like this: hadoop distcp -pu -delete hdfs_path s3a://bucket
The -pu flag preserves the file owner (user) during the copy.
Please note that removing -update causes distcp to copy all files from the source to the destination, even those that have not been modified. This can be time-consuming and may incur unnecessary data transfer costs if you have a large number of files. If you only want to copy files that have actually changed, consider a tool such as s3-dist-cp or aws s3 sync that supports checksum-based incremental copies: these use checksums to determine which files need to be copied, rather than relying on modification times or file sizes.
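The skip decision described above can be illustrated with a short sketch. This is a simplification of what -update does, written per the behavior described in this thread, not the actual distcp source code:

```python
def should_skip(src_size: int, src_mtime: float,
                dst_size: int, dst_mtime: float) -> bool:
    """With -update, a file is skipped when the destination copy has the
    same size and a modification time equal to or newer than the source.
    Content changes that alter neither size nor mtime are therefore missed."""
    return src_size == dst_size and dst_mtime >= src_mtime
```

This is exactly why an in-place edit that keeps the byte count and timestamp identical is invisible to -update, while checksum-based tools would still detect the change.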
04-12-2023
01:44 AM
You should be able to access a trial cluster using the link below. https://www.cloudera.com/campaign/try-cdp-public-cloud.html#:~:text=Try%20CDP%20Public%20Cloud%20for,hybrid%20and%20multi%2Dcloud%20data