
How to configure dynamic allocation on HDInsight 3.6 for a Spark job running from a Jupyter Notebook?



New Contributor

Hi, I am running HDInsight 3.6 on Azure using Jupyter Notebook.

I configured Spark dynamic allocation as a default Spark parameter through Ambari, following the post.
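For reference, enabling dynamic allocation in Ambari typically means setting properties like these in the Spark defaults (the values below are illustrative assumptions, not my exact settings):

```
spark.dynamicAllocation.enabled=true
spark.shuffle.service.enabled=true
spark.dynamicAllocation.minExecutors=1
spark.dynamicAllocation.maxExecutors=10
```

Dynamic allocation requires the external shuffle service to be enabled, which is why `spark.shuffle.service.enabled` appears alongside it.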

When I start a new Spark session through Jupyter Notebook, without any Spark configuration related to the number of executors or memory consumption, I see the Spark job running (owned by the Livy user) using 3 containers (3 vCores).
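Since Jupyter on HDInsight submits jobs through Livy, one possibility I considered is setting the configuration explicitly for the session in a sparkmagic `%%configure` cell before the session starts (a sketch; the executor counts here are placeholder assumptions):

```
%%configure -f
{
  "conf": {
    "spark.dynamicAllocation.enabled": "true",
    "spark.shuffle.service.enabled": "true",
    "spark.dynamicAllocation.minExecutors": "1",
    "spark.dynamicAllocation.maxExecutors": "10"
  }
}
```

The `-f` flag forces the Livy session to restart so the new configuration takes effect. But I would expect the Ambari-level defaults to apply without this.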

After starting a read operation that reads multiple files from Azure Blob Storage, I still see only 3 containers running.

My question is: how do I configure dynamic allocation on HDInsight 3.6 for a Spark job running from a Jupyter Notebook?

Thanks
