Member since: 11-12-2018
Posts: 189
Kudos Received: 177
Solutions: 32

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 592 | 04-26-2024 02:20 AM |
 | 754 | 04-18-2024 12:35 PM |
 | 3410 | 08-05-2022 10:44 PM |
 | 3138 | 07-30-2022 04:37 PM |
 | 6827 | 07-29-2022 07:50 PM |
04-26-2024
02:20 AM
1 Kudo
Flume, Storm, Druid, Falcon, Mahout, Ambari, Pig, Sentry, and Navigator have been changed or removed in CDP, with replacement components available. Storm can be replaced with Cloudera Streaming Analytics (CSA), powered by Apache Flink. Contact your Cloudera account team for more information about moving from Storm to CSA. You can also refer to the documentation on comparing Storm and Flink and on Migrating from Storm to Flink.
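As a rough illustration of what a classic Storm word-count topology looks like on the Flink side, here is a minimal PyFlink DataStream sketch (a hedged example, not from the original post: it assumes the pyflink package is installed, and the input elements and job name are placeholders):

```python
# Minimal PyFlink word-count sketch: spouts/bolts become a single dataflow pipeline.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

(
    env.from_collection(["storm topologies", "become flink jobs"])  # placeholder input
    .flat_map(lambda line: line.split(" "))    # split lines into words
    .map(lambda word: (word, 1))               # emit (word, 1) pairs
    .key_by(lambda pair: pair[0])              # group by word
    .reduce(lambda a, b: (a[0], a[1] + b[1]))  # running count per word
    .print()
)

env.execute("wordcount_sketch")
```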
04-22-2024
12:46 PM
@Alaaeldin Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
04-24-2023
07:25 AM
Sorry for the late response. I use Oozie to submit a Spark job.
12-02-2022
09:44 AM
@sss123 Are you able to run Spark commands via spark-shell or spark-submit?
12-01-2022
02:25 AM
Hello @QiDam As stated by JD above, the CDE service relies on the Ozone service on the Base cluster. If the Ozone service isn't in a healthy state, enabling the CDE service fails with a trace similar to the one you shared. We recommend the following checks: ensure the Ozone service is up and running on the Base cluster; create a new Environment and check whether enabling the CDE service succeeds there; if it does, re-attempt enabling the CDE service on the existing Environment. If the above suggestions don't help, we suggest engaging Support, since any further troubleshooting would require sharing logs over the public Community forum, which may contain customer details. We shall mark the post as resolved now; if you have any concerns, feel free to update the post accordingly. Regards, Smarak
11-01-2022
12:01 AM
Hi @Siddu198 Add this config to your job: set("mapreduce.fileoutputcommitter.algorithm.version","2")
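For a PySpark job, the same property can be set when building the session. A minimal sketch, assuming the job writes through the Hadoop FileOutputCommitter (the app name and output path below are placeholders):

```python
# Sketch: enable FileOutputCommitter algorithm v2 for a PySpark job.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("committer-v2-demo")  # placeholder app name
    .config("mapreduce.fileoutputcommitter.algorithm.version", "2")
    .getOrCreate()
)

# Any Hadoop-committer-based write (e.g. Parquet to HDFS) now uses algorithm v2.
spark.range(10).write.mode("overwrite").parquet("/tmp/committer_v2_demo")
```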
09-26-2022
06:57 AM
Hello @jagadeesan , @rki_ The parameters you mentioned do not appear in Ambari. Does that mean our clusters are running with the default settings, leaving them exposed to the vulnerability? Could you please explain how to set these parameters (which custom settings for Spark 1 and Spark 2, as well as the keys and values)? Thanks in advance.
08-31-2022
09:15 PM
Hi @nvelraj The PySpark job works locally because the pandas library is installed on your local system. When you run it on the cluster, the pandas library/module is not available on the worker nodes, so you get the following error:
ModuleNotFoundError: No module named 'pandas'
To solve the issue, install the pandas library/module on all machines, or ship a virtual environment with the job.
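One common way to ship the dependency, following Spark's "Python Package Management" approach, is to pack a virtual environment and distribute it with the job. A minimal sketch, assuming an archive pyspark_venv.tar.gz built beforehand with venv-pack or conda-pack (the archive name and the "environment" alias are placeholders; on YARN the equivalent property is spark.yarn.dist.archives):

```python
# Sketch: distribute a packed virtualenv so executors can import pandas.
import os
from pyspark.sql import SparkSession

os.environ["PYSPARK_PYTHON"] = "./environment/bin/python"

spark = (
    SparkSession.builder
    .appName("pandas-dependency-demo")  # placeholder app name
    .config("spark.archives", "pyspark_venv.tar.gz#environment")
    .getOrCreate()
)

# Runs on the executors: the import succeeds only if the environment was shipped.
def uses_pandas(_):
    import pandas as pd
    yield pd.__version__

print(spark.sparkContext.parallelize([0], 1).mapPartitions(uses_pandas).collect())
```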
08-12-2022
04:49 AM
To solve "unable to find valid certification path to requested target", I just imported the certificate into the Java truststore and restarted the Zeppelin server.
### LINUX LIST CERT
cd /usr/lib/jvm/java-11-openjdk-11.0.15.0.10-2.el8_6.x86_64/bin
./keytool -list -keystore /usr/lib/jvm/java-11-openjdk-11.0.15.0.10-2.el8_6.x86_64/lib/security/cacerts
### LINUX IMPORT CERT
./keytool --import --alias keystore_cloudera --file /var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_cacerts.pem -keystore /usr/lib/jvm/java-11-openjdk-11.0.15.0.10-2.el8_6.x86_64/lib/security/cacerts
08-02-2022
06:34 PM
@Asim- For JDBC access as well, you need HWC for managed tables. Below is an example for Spark 2; but as mentioned earlier, for Spark 3 there is no other way to connect to Hive ACID tables from Apache Spark than HWC, and it is not yet a supported feature for Spark 3.2 / CDS 3.2 in CDP 7.1.7. Marking this thread as closed; if you have any issues related to external tables, kindly start a new Support-Questions thread for better tracking of the issue and documentation. Thanks
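A minimal HWC read sketch for Spark 2 (hedged: the HiveServer2 JDBC URL, table name, and app name are placeholders, and the HWC connector jar and Python zip must already be supplied to the job as per your cluster setup):

```python
# Sketch: read a Hive managed (ACID) table from Spark 2 via the Hive Warehouse Connector.
from pyspark.sql import SparkSession
from pyspark_llap import HiveWarehouseSession

spark = (
    SparkSession.builder
    .appName("hwc-acid-read-demo")  # placeholder app name
    .config("spark.sql.hive.hiveserver2.jdbc.url",
            "jdbc:hive2://<hs2-host>:10000/default")  # placeholder URL
    .getOrCreate()
)

hive = HiveWarehouseSession.session(spark).build()

# Queries go through HiveServer2, which is what managed/ACID tables require.
df = hive.executeQuery("SELECT * FROM default.my_acid_table LIMIT 10")
df.show()
```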