Member since: 03-01-2016
Posts: 609
Kudos Received: 12
Solutions: 7
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1601 | 02-20-2024 10:42 PM |
|  | 1949 | 10-26-2023 05:40 PM |
|  | 1266 | 06-13-2023 07:55 PM |
|  | 2053 | 04-28-2019 12:21 AM |
|  | 1376 | 04-28-2019 12:12 AM |
02-20-2024
10:42 PM
1 Kudo
Hi @yanseoi, what you are encountering is the same issue discussed in the Flink community: https://lists.apache.org/thread/07d46txb6vttw7c8oyr6z4n676vgqh28

It is caused by:
https://issues.apache.org/jira/browse/FLINK-29978
https://issues.apache.org/jira/browse/FLINK-29977

and was fixed by a Kafka client version upgrade in:
https://issues.apache.org/jira/browse/FLINK-31599

The same change should be included in CSA 1.12.0, so please try upgrading: https://docs.cloudera.com/csa/1.12.0/installation/topics/csa-upgrade-artifacts.html
10-26-2023
05:40 PM
Officially:

- CDP 7.1.7 (GA, SP1, SP2) supports CDS 3.2 (based on Apache Spark 3.2.3): https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/cds-3/topics/spark-3-requirements.html
- CDP 7.1.8 supports CDS 3.3 (based on Apache Spark 3.3.0): https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/cds-3/topics/spark-3-requirements.html
- CDP 7.1.9 supports CDS 3.3 (based on Apache Spark 3.3.2): https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/cds-3/topics/spark-3-requirements.html

There is essentially a 1:1 mapping between the supported CDS Spark 3 version and each CDP 7.1.{7,8,9} release.
06-13-2023
07:55 PM
Hi, there is a similar feature in CDP to the time-based scheduling rules in CDH, called "Dynamic Queue Scheduling". It is Tech Preview in CDP 7.1.7 and GA in CDP 7.1.8; please refer to the docs below:

https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/yarn-allocate-resources/topics/yarn-dynamic-queue-scheduling.html
https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/yarn-allocate-resources/topics/yarn-dynamic-queue-scheduling.html

Hope this helps.
05-13-2019
04:29 AM
To be more accurate: technically, CVE-2018-1334 is fixed in CDH 5.14.4. However, a new issue with a similar privilege escalation vulnerability was found afterwards, CVE-2018-11760, and we fixed that one in CDH 5.15.1. So with CDH 5.15.1 you are not affected by either of these two similar privilege escalation vulnerabilities.
04-28-2019
12:21 AM
You may be able to achieve this goal with the YARN node label feature. See the detailed explanation in the following Hortonworks post: https://community.hortonworks.com/articles/72450/node-labels-configuration-on-yarn.html

The current Cloudera CDH distribution does not officially support node labels. We are working on releasing a unified version of CDH + HDP, the new Cloudera Data Platform (CDP), later this year. If you are a subscription customer, please feel free to contact Cloudera Support to enquire about the state of this feature and of the new CDP release. Thanks.
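For reference, enabling node labels in upstream Apache Hadoop (as in the Hortonworks post above) comes down to a couple of yarn-site.xml properties plus `yarn rmadmin` commands. This is only a sketch of the upstream configuration, which CDH does not officially support; the HDFS path and label names here are placeholders:

```xml
<!-- yarn-site.xml: upstream Apache Hadoop node-label settings (sketch;
     not officially supported on CDH). -->
<property>
  <name>yarn.node-labels.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Where the ResourceManager persists the label store; path is a placeholder. -->
  <name>yarn.node-labels.fs-store.root-dir</name>
  <value>hdfs://namenode:8020/yarn/node-labels</value>
</property>
```

Labels are then created with `yarn rmadmin -addToClusterNodeLabels "gpu"` and assigned to hosts with `yarn rmadmin -replaceLabelsOnNode "host1=gpu"`; the Hortonworks article walks through mapping queues to labels in the Capacity Scheduler.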
04-28-2019
12:12 AM
1 Kudo
You can use the Spark action in Oozie to submit Spark applications: https://archive.cloudera.com/cdh5/cdh/5/oozie/DG_SparkActionExtension.html#Spark_Action

If you are more familiar with the spark-submit tool, you can use an Oozie shell action instead: https://archive.cloudera.com/cdh5/cdh/5/oozie/DG_ShellActionExtension.html

Either way, make sure the Spark gateway role is deployed on the Oozie server and the NodeManager nodes, so that the runtime environment always has the dependencies available.
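A minimal Spark-action workflow, following the schema in the Spark Action extension doc linked above, looks roughly like this (the workflow name, class, jar path, and spark-opts are placeholders for your application):

```xml
<workflow-app name="spark-wf" xmlns="uri:oozie:workflow:0.5">
  <start to="spark-node"/>
  <action name="spark-node">
    <spark xmlns="uri:oozie:spark-action:0.1">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <master>yarn-cluster</master>
      <name>MySparkApp</name>
      <!-- Main class and jar are placeholders for your application. -->
      <class>com.example.MyApp</class>
      <jar>${nameNode}/apps/myapp.jar</jar>
      <spark-opts>--executor-memory 2G</spark-opts>
    </spark>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Spark action failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>
```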
04-22-2019
02:13 AM
The error message shows there is no valid leader for the partition you are accessing. In Kafka, all reads and writes go through the leader of a partition, so first make sure the topic's partitions have a healthy leader by running:

kafka-topics --describe --zookeeper <zk_url, put /chroot if you have any> --topic <topic_name>
11-11-2018
10:05 PM
You may need to increase:

yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb

The defaults could be too small to launch a default Spark executor container (1024 MB + 512 MB overhead). You may also want to enable INFO logging for the spark-shell, via /etc/spark/conf/log4j.properties, to see the exact error/warning it produces.
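To see why those settings matter, here is a small sketch of how YARN sizes an executor container: it rounds each request up to a multiple of yarn.scheduler.minimum-allocation-mb, so the overhead figure from the post above pushes a default executor past a single 1024 MB allocation. (The overhead value is taken from the post; actual defaults vary by Spark version.)

```python
import math

def yarn_container_mb(executor_mb: int, overhead_mb: int, min_alloc_mb: int = 1024) -> int:
    """YARN rounds each container request up to the next multiple of
    yarn.scheduler.minimum-allocation-mb (1024 MB by default)."""
    requested = executor_mb + overhead_mb
    return math.ceil(requested / min_alloc_mb) * min_alloc_mb

# A default 1024 MB executor plus 512 MB overhead needs a 2048 MB container,
# so a maximum-allocation-mb below 2048 leaves the spark-shell stuck waiting.
print(yarn_container_mb(1024, 512))  # -> 2048
```

If yarn.scheduler.maximum-allocation-mb is below that rounded figure, the request can never be satisfied and the shell appears to hang at startup.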
10-17-2018
11:45 AM
3 Kudos
Hi, Continuum ships the Anaconda parcel and Cloudera has no control over which Python version it installs. Please use the OS package management tool to install Python 3.5 on the servers in the CDH cluster. Once that is done, follow this doc to set the Python version for your PySpark job: https://www.cloudera.com/documentation/enterprise/5-8-x/topics/spark_python.html#spark_python__section_ark_lkn_25
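The doc above boils down to pointing Spark at the interpreter you installed. A minimal sketch of the spark-env.sh setting it describes (the path is a placeholder for wherever your OS package manager put Python 3.5):

```shell
# spark-env.sh: make PySpark use the OS-installed interpreter.
# The path below is a placeholder; adjust to your installation.
export PYSPARK_PYTHON=/usr/bin/python3.5
```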