Member since: 03-06-2018
Posts: 201
Kudos Received: 1
Solutions: 0
09-21-2022
11:38 PM
Hello @Boron, I believe you are using HDP 3.x. Note that Spark 1.x is not available in HDP 3; you need to use Spark 2.x. Set SPARK_HOME to the Spark 2 client:

export SPARK_HOME=/usr/hdp/current/spark2-client
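As a quick sanity check (a minimal shell sketch, assuming the standard HDP 3 client layout), you can confirm the variable points at a Spark 2 installation:

export SPARK_HOME=/usr/hdp/current/spark2-client
# Should report a 2.x version if SPARK_HOME points at the Spark 2 client
$SPARK_HOME/bin/spark-submit --version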
08-19-2022
05:51 AM
Hi @Deepan_N, running the command below directly in python3 (r0.headers["www-authenticate"]) returns the following error:

Python 3.6.8 (default, Nov 16 2020, 16:55:22)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> r0.headers["www-authenticate"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'r0' is not defined
>>>

Below is a screenshot of the commands executed in bash:
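The NameError means r0 was never assigned in that interpreter session; it presumably came from an earlier requests call in the original instructions. A hypothetical Python sketch of what would need to run first in the same session (the endpoint URL is a placeholder, not from this thread):

import requests

# r0 must be created in the same python3 session before its headers
# can be inspected; this URL is a placeholder endpoint.
r0 = requests.get("http://namenode-host:50070/webhdfs/v1/?op=LISTSTATUS")
# .get() returns None instead of raising KeyError if the header is absent
print(r0.headers.get("www-authenticate"))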
02-08-2022
04:01 AM
Hi @loridigia, if dynamic allocation is not enabled for the cluster/application and you set --conf spark.executor.instances=1, Spark will launch only one executor. In addition to that executor, you will also see the AM/driver in the Executors tab of the Spark UI.
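For illustration, a hypothetical submission with a single static executor might look like this (the master, class, and JAR names are placeholders, not from the original thread):

spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=false \
  --conf spark.executor.instances=1 \
  --class com.example.MyApp \
  myapp.jar

The Executors tab will then show two rows: the single executor and the "driver" entry.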
02-02-2022
07:52 AM
Hello @loridigia, I don't think there is a direct way to achieve this, but there is a workaround. Start the Spark job with dynamic allocation enabled, and set the minimum executors to "0", the initial executors to "1", and the idle timeout to "5s". With these settings the job starts with one executor, and after 5 seconds that container is killed because it has been idle for more than 5 seconds. The application is then left with only the driver/ApplicationMaster container running.

CONFIGS:
--conf spark.dynamicAllocation.enabled=true
--conf spark.shuffle.service.enabled=true
--conf spark.dynamicAllocation.executorIdleTimeout=5s
--conf spark.dynamicAllocation.initialExecutors=1
--conf spark.dynamicAllocation.maxExecutors=1
--conf spark.dynamicAllocation.minExecutors=0

NOTE: These configs can also be added to spark-defaults.conf so that they apply to all jobs, but be careful, as that would affect the configurations of other running Spark jobs. If this resolves your issue, please mark the answer as the accepted solution!
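For completeness, a full submission with these settings might look like this (a sketch; the master, class, and JAR names are placeholders, not from the original post):

spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.executorIdleTimeout=5s \
  --conf spark.dynamicAllocation.initialExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=1 \
  --conf spark.dynamicAllocation.minExecutors=0 \
  --class com.example.MyApp \
  myapp.jar

Note that dynamic allocation on YARN also assumes the external shuffle service is configured on the NodeManagers; spark.shuffle.service.enabled=true only tells the job to use it.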
01-06-2022
05:51 AM
This appears to be the cause. Switching to MS Edge avoided the HSTS feature in Chrome. I will look at enabling HTTPS on all UIs, though, since that is the correct solution in the long term (the spark.ssl.historyServer.enabled option is currently set to false).
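For reference, enabling HTTPS for the History Server goes through Spark's spark.ssl.* options in spark-defaults.conf; a minimal sketch, assuming a JKS keystore (the paths, port, and passwords below are placeholders, not values from this thread):

spark.ssl.historyServer.enabled true
spark.ssl.historyServer.port    18480
spark.ssl.keyStore              /etc/security/keystore.jks
spark.ssl.keyStorePassword      changeit
spark.ssl.keyPassword           changeit
spark.ssl.protocol              TLSv1.2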