Member since
06-02-2020
331
Posts
67
Kudos Received
49
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2798 | 07-11-2024 01:55 AM |
| | 7859 | 07-09-2024 11:18 PM |
| | 6569 | 07-09-2024 04:26 AM |
| | 5903 | 07-09-2024 03:38 AM |
| | 5598 | 06-05-2024 02:03 AM |
03-30-2023
04:25 AM
Hi @ShobhitSingh You need to adjust the CSV file.

sample.csv:

```
COL1|COL2|COL3|COL4
1st Data|2nd|3rd data|4th data
1st Data|2nd \\P data|3rd data|4th data
"1st Data"|"2nd '\\P' data"|"3rd data"|"4th data"
"1st Data"|"2nd '\\\\P' data"|"3rd data"|"4th data"
```

Spark code:

```
spark.read.format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .option("delimiter", "|")
  .load("/tmp/sample.csv")
  .show(false)
```

Output:

```
+--------+--------------+----------+--------+
|COL1    |COL2          |COL3      |COL4    |
+--------+--------------+----------+--------+
|1st Data|2nd           |3rd data  |4th data|
|1st Data|2nd \\P data  |3rd data  |4th data|
|1st Data|2nd '\P' data |3rd data  |4th data|
|1st Data|2nd '\\P' data|3rd data  |4th data|
+--------+--------------+----------+--------+
```
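The same delimiter/quote/escape behavior can be sketched outside Spark with Python's built-in `csv` module (an analogy only — Spark uses its own CSV parser, but both default to the backslash as the escape character inside quoted fields):

```python
import csv
import io

# Pipe-delimited sample mirroring the quoted row above; in this Python source
# string, '\\\\' is two literal backslashes, exactly as in the csv file.
sample = 'COL1|COL2|COL3|COL4\n"1st Data"|"2nd \'\\\\P\' data"|"3rd data"|"4th data"\n'

# '|' delimiter, '"' quote character, '\' escape character: inside a quoted
# field the escape character removes the special meaning of the next char,
# so the two backslashes collapse to one, matching the Spark output above.
rows = list(csv.reader(io.StringIO(sample), delimiter='|',
                       quotechar='"', escapechar='\\'))
print(rows[1][1])  # 2nd '\P' data
```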
03-30-2023
02:51 AM
Hi @Albap Based on the logs, I can see you have created a streaming application. By default, a streaming application runs 24/7; it stops only when it is killed or an interrupting event happens at the system level. The better way to stop a Spark streaming application is a graceful shutdown. If you need further help, please raise a Cloudera case and we will work on it.
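For older DStream-based jobs, Spark exposes a configuration property that asks the driver to finish in-flight batches before exiting; a minimal sketch (set it before the application starts):

```
spark.streaming.stopGracefullyOnShutdown=true
```

For Structured Streaming, the usual pattern is instead to call `StreamingQuery.stop()` from a controlled shutdown path (for example, after checking a marker file between batches).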
03-30-2023
02:40 AM
Hi @dmharshit With only the log snippet below, it is difficult to provide a solution because it does not include the full logs. Please create a Cloudera case so we can check the logs and provide a solution. ERROR : FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session c-47f2-aceb-22390502b303 Error: Error while compiling statement: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session d6d96da5-f2bc-47f2-aceb-22390502b303 (state=42000,code=30041)
02-27-2023
02:03 AM
Spark Rolling event log files
1. Introduction
While running a long-running Spark application (for example, a streaming application), Spark generates a single, ever-growing event log file until the application is killed or stopped. Maintaining a single huge event log file can be costly, and it also requires significant resources to replay on each update in the Spark History Server.
To avoid creating a single huge event log file, the Spark team introduced rolling event log files.
2. Enabling the Spark Rolling Event logs in CDP
Step 1: Enable rolling event logs and set the max file size
CM -->Spark 3 --> Configuration --> Spark 3 Client Advanced Configuration Snippet (Safety Valve) for spark3-conf/spark-defaults.conf.
spark.eventLog.rolling.enabled=true
spark.eventLog.rolling.maxFileSize=128m
The default spark.eventLog.rolling.maxFileSize value is 128MB. The minimum value is 10MB.
Step 2: Set the maximum number of rolling event log files to retain
CM -->Spark 3 --> Configuration --> History Server Advanced Configuration Snippet (Safety Valve) for spark3-conf/spark-history-server.conf
spark.history.fs.eventLog.rolling.maxFilesToRetain=2
By default, spark.history.fs.eventLog.rolling.maxFilesToRetain is infinity, meaning all event log files are retained. The minimum value is 1.
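Instead of setting them cluster-wide through the safety valve, the two application-side properties can also be passed per job; a sketch (the class name and jar are placeholders for your own application):

```
spark3-submit \
  --conf spark.eventLog.rolling.enabled=true \
  --conf spark.eventLog.rolling.maxFileSize=128m \
  --class com.example.LongRunningApp app.jar
```

Note that spark.history.fs.eventLog.rolling.maxFilesToRetain is a History Server setting, so it cannot be overridden per application.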
3. Verify the output
Verify the output from the Spark history server event log directory.
[root@c3543-node4 ~]# sudo -u spark hdfs dfs -ls -R /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002
-rw-rw---- 3 spark spark 0 2023-01-04 07:03 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/appstatus_application_1672813574470_0002.inprogress
-rw-rw---- 3 spark spark 10485458 2023-01-04 07:05 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/events_1_application_1672813574470_0002
-rw-rw---- 3 spark spark 0 2023-01-04 07:05 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/events_2_application_1672813574470_0002
[root@c3543-node4 ~]# sudo -u spark hdfs dfs -ls -R /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002
-rw-rw---- 3 spark spark 0 2023-01-04 07:03 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/appstatus_application_1672813574470_0002.inprogress
-rw-rw---- 3 spark spark 492014 2023-01-04 07:06 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/events_1_application_1672813574470_0002.compact
-rw-rw---- 3 spark spark 10489509 2023-01-04 07:06 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/events_2_application_1672813574470_0002
-rw-rw---- 3 spark spark 227068 2023-01-04 07:06 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/events_3_application_1672813574470_0002
[root@c3543-node4 ~]# sudo -u spark hdfs dfs -ls -R /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002
-rw-rw---- 3 spark spark 0 2023-01-04 07:03 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/appstatus_application_1672813574470_0002.inprogress
-rw-rw---- 3 spark spark 873356 2023-01-04 07:06 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/events_2_application_1672813574470_0002.compact
-rw-rw---- 3 spark spark 10484816 2023-01-04 07:06 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/events_3_application_1672813574470_0002
-rw-rw---- 3 spark spark 339165 2023-01-04 07:06 /user/spark/spark3ApplicationHistory/eventlog_v2_application_1672813574470_0002/events_4_application_1672813574470_0002
References:
SPARK-28594
Applying compaction on rolling event log files
02-03-2023
09:10 PM
After following the above steps, I'm still not able to start HiveServer2.
01-18-2023
01:07 AM
Hi @Nikhil44 First of all, Cloudera does not support standalone Spark installations. To access any Hive table, you need hive-site.xml and the Hadoop-related configuration files (core-site.xml, hdfs-site.xml, and yarn-site.xml).
01-02-2023
12:05 AM
@Samie, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
12-08-2022
08:30 PM
Hi @quangbilly79 You have used the CDP hbase-spark-1.0.0.7.2.15.0-147.jar instead of the CDH one. There is no guarantee that the latest CDP jar will work in CDH. Luckily for you, it worked.
11-07-2022
02:09 AM
Hi @PNCJeff I would recommend installing and using the Livy server in the CDP cluster. The Livy Kerberos configuration parameters are as follows:
livy.server.launch.kerberos.keytab=<LIVY_SERVER_PATH>/livy.keytab
livy.server.launch.kerberos.principal=livy/server@DOMAIN.COM
livy.server.auth.type=kerberos
livy.server.auth.kerberos.keytab=<LIVY_SERVER_PATH>/livy.keytab
livy.server.auth.kerberos.principal=HTTP/server@DOMAIN.COM
livy.server.auth.kerberos.name-rules=RULE:[2:$1@$0](rangeradmin@DOMAIN.COM)s/(.*)@DOMAIN.COM/ranger/\u000ARULE:[2:$1@$0](rangertagsync@DOMAIN.COM)s/(.*)@DOMAIN.COM/rangertagsync/\u000ARULE:[2:$1@$0](rangerusersync@DOMAIN.COM)s/(.*)@DOMAIN.COM/rangerusersync/\u000ARULE:[2:$1@$0](rangerkms@DOMAIN.COM)s/(.*)@DOMAIN.COM/keyadmin/\u000ARULE:[2:$1@$0](atlas@DOMAIN.COM)s/(.*)@DOMAIN.COM/atlas/\u000ADEFAULT\u000A
10-18-2022
11:37 PM
Hello @vaishaakb, Sadly, we have not reached a solution for the main issue yet. Yes, I checked this blog, and I also checked every piece of documentation provided by Cloudera and others to try to resolve this issue, but no luck. I also want to point out that the blog's first demo is not working properly: the Cloudera team posted output that shows an error, ImportError: No module named numpy, which proves that the Docker image didn't work with PySpark properly.