How to configure dynamic allocation on HDInsight 3.6 for a Spark job running from Jupyter Notebook?

New Contributor

Hi, I am running HDInsight 3.6 on Azure and using Jupyter Notebook.

I configured Spark dynamic allocation as default Spark parameters through Ambari, following the post.
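
Concretely, the properties I mean are the standard dynamic allocation settings in spark2-defaults, along these lines (the min/max numbers below are just example values, not necessarily what I set):

    # standard Spark dynamic allocation properties (example values)
    # the external shuffle service is required for dynamic allocation
    spark.dynamicAllocation.enabled true
    spark.shuffle.service.enabled true
    spark.dynamicAllocation.minExecutors 1
    spark.dynamicAllocation.maxExecutors 10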

When I start a new Spark session through Jupyter Notebook without any Spark configuration related to the number of executors or memory consumption, I see the Spark job running (owned by the Livy user) using 3 containers (3 vCores).
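
As a side note, my understanding is that session-level settings can also be passed to Livy through the sparkmagic %%configure cell at the top of the notebook; a minimal sketch, where the executor bounds are illustrative:

    %%configure -f
    { "conf": { "spark.dynamicAllocation.enabled": "true",
                "spark.dynamicAllocation.minExecutors": "1",
                "spark.dynamicAllocation.maxExecutors": "10" } }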

After starting a read operation that loads multiple files from Azure Blob Storage, I still see only 3 containers running.
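
For context, the read is roughly the following (the storage account, container, path, and file format are placeholders, not my real names):

    # roughly what the notebook cell does; account/container/path are placeholders
    df = spark.read.csv("wasb://mycontainer@myaccount.blob.core.windows.net/input/*.csv")
    df.count()  # action that triggers the job; I expected executors to scale up here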

My question is: how do I configure dynamic allocation on HDInsight 3.6 for a Spark job running from Jupyter Notebook?

Thanks