Member since
06-02-2020
331
Posts
64
Kudos Received
49
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 997 | 07-11-2024 01:55 AM
 | 2815 | 07-09-2024 11:18 PM
 | 2474 | 07-09-2024 04:26 AM
 | 1865 | 07-09-2024 03:38 AM
 | 2144 | 06-05-2024 02:03 AM
02-21-2022
02:12 AM
Hi @Rajeshhadoop Please find a few references:
https://spark.apache.org/docs/2.4.0/sql-migration-guide.html
https://blog.knoldus.com/migration-from-spark-1-x-to-spark-2-x/
https://docs.cloudera.com/documentation/enterprise/upgrade/topics/ug_spark_post.html
Note: We don't have a document for upgrading straight from Spark 1.x to Spark 2.4.
02-14-2022
07:48 PM
1 Kudo
Hi @Rekasri The above exception occurs due to a code-related issue. Please check your code where you create/close the SparkSession object. Note: If you have already found the answer, please share the problematic code and the fixed code; it will be helpful for others.
02-08-2022
04:29 AM
Hi @kanikach I don't think we have a mechanism to tell what changes happened in the current release vs the previous release, other than approaching the engineering team. If you want more detailed change information, it is better to raise a Cloudera case; we will check with the engineering team and get back to you.
02-08-2022
04:12 AM
Hi @AmineCHERIFI I suspect the timezone is causing the issue. To check further, could you please share the sample data you created and the table structure? We will try to reproduce it internally. Note: Have you tried the same logic without HWC? Please test and share the results as well. HWC is not required for reading/writing external tables.
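One quick check for timezone-related mismatches is to pin the Spark SQL session timezone explicitly and re-run the job; a sketch below (`spark.sql.session.timeZone` is a standard Spark conf, but the UTC choice and the `your_app.py` name are just placeholders for illustration):

```shell
# Pin the session timezone so reads/writes do not depend on the local JVM
# timezone; compare results with and without this setting.
spark-submit \
  --conf spark.sql.session.timeZone=UTC \
  your_app.py
```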
02-08-2022
04:07 AM
1 Kudo
Hi @victorescosta You need to check in the producer code which format the Kafka messages are produced in and which serializer class is used. You need to use the same format/serializer when deserializing the data. For example, if the data was written using Avro, then it must be deserialized using Avro. @araujo You are right. The customer needs to check their producer code and serializer class.
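The rule above can be sketched with a generic encode/decode roundtrip; base64 here merely stands in for a real serializer such as Avro, and this is not Kafka API code:

```shell
# "Producer" side: serialize the message with one encoding (base64 here).
msg='{"id":1,"name":"test"}'
encoded=$(printf '%s' "$msg" | base64)

# "Consumer" side: decode with the MATCHING decoder, otherwise the bytes
# come out as garbage. The same applies to Avro vs. JSON vs. String serdes.
decoded=$(printf '%s' "$encoded" | base64 -d)

if [ "$decoded" = "$msg" ]; then
  echo "roundtrip-ok"
fi
```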
02-08-2022
04:01 AM
Hi @loridigia If dynamic allocation is not enabled for the cluster/application and you set --conf spark.executor.instances=1, then Spark will launch only 1 executor. Apart from that executor, you will also see the AM/driver in the Executors tab of the Spark UI.
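A minimal spark-submit sketch of this setup (`my_app.py` is a placeholder for your application):

```shell
# Disable dynamic allocation explicitly and pin the executor count to 1.
# The Executors tab of the Spark UI will then show this one executor
# plus a "driver" row.
spark-submit \
  --conf spark.dynamicAllocation.enabled=false \
  --conf spark.executor.instances=1 \
  my_app.py
```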
12-07-2021
10:29 PM
1 Kudo
In this article, we will learn how to configure the Zeppelin JDBC (Phoenix) interpreter, with a working example.
1. Configuring the JDBC (Phoenix) interpreter: Log in to the Zeppelin UI -> click on the user name (in my case, admin) at the right-hand corner. It will display a menu -> click on Interpreter.
Click on + Create at the right-hand side of the screen.
It will display a popup menu. Enter the Interpreter Name as jdbc and select the Interpreter Group as jdbc. Then it will populate Properties in table format.
Click on the + button, add the Phoenix-related properties according to your cluster, and click on the Save button.
Property | Value
---|---
phoenix.driver | org.apache.phoenix.jdbc.PhoenixDriver
phoenix.url | jdbc:phoenix:localhost:2181:/hbase
phoenix.user |
phoenix.password |
2. Creating the Notebook:
Click the Notebook dropdown menu in the top left-hand corner, select Create new note, enter the Note Name as Phoenix_Test, and select the Default Interpreter as jdbc. Finally, click on the Create button.
3. Running the Phoenix queries using jdbc (Phoenix) interpreter in Notebook:
%jdbc(phoenix)
CREATE TABLE IF NOT EXISTS Employee (
id INTEGER PRIMARY KEY,
name VARCHAR(225),
salary FLOAT
)
%jdbc(phoenix)
UPSERT INTO Employee VALUES(1, 'Ranga Reddy', 24000)
%jdbc(phoenix)
UPSERT INTO Employee (id, name, salary) VALUES(2, 'Nishantha', 10000)
%jdbc(phoenix)
select * from Employee
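If you want to verify the table outside Zeppelin, the same queries can be run from the Phoenix sqlline shell; a sketch below (the parcel path, launcher name, and ZooKeeper quorum are assumptions — adjust them for your cluster):

```shell
# Connect to Phoenix via sqlline using the same ZK quorum as the
# interpreter's phoenix.url (example values, not universal).
/opt/cloudera/parcels/CDH/bin/phoenix-sqlline localhost:2181:/hbase
# Then at the sqlline prompt:
#   SELECT * FROM Employee;
```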
4. Final Results:
Happy Learning.
10-31-2021
10:24 PM
Hi By default, in the /opt/cloudera/cm-agent/service/hive/hive.sh file, the TEZ_JARS property is: TEZ_JARS="$PARCELS_ROOT/CDH/jars/tez-*:$PARCELS_ROOT/CDH/lib/tez/*.jar:$CONF_DIR/tez-conf" We need to update the TEZ_JARS property like below: TEZ_JARS="/opt/cloudera/parcels/CDH/jars/tez-*:/opt/cloudera/parcels/CDH/lib/tez/*.jar:$CONF_DIR/tez-conf" After that, we need to restart the service.
10-28-2021
12:05 AM
Hi @Marwn Please check the application logs to identify why the application startup is taking X minutes. Without the application logs it is very difficult to suggest anything.
10-27-2021
11:10 PM
Hi @EBH The Spark application failed with an OOM error. To understand why the OOM occurred, we need to go through the Spark event logs, the application logs, and the spark-submit command; currently none of these have been shared. As a next step, try increasing the executor/driver memory and set the memory overhead to 0.1 or 0.2 (i.e., 10-20%) of the driver/executor memory. If the issue is still not resolved, please raise a Cloudera case and we will work on the issue.
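A sketch of that sizing rule, assuming a 4 GB executor purely as an example (the actual values must come from your workload):

```shell
# Overhead at ~10% of a 4096 MB executor (integer arithmetic).
EXECUTOR_MEM_MB=4096
OVERHEAD_MB=$(( EXECUTOR_MEM_MB * 10 / 100 ))
echo "${OVERHEAD_MB}"
# This would translate into spark-submit flags along the lines of:
#   --conf spark.executor.memory=4g
#   --conf spark.executor.memoryOverhead=409m
```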