
Spark 2 interpreter runs only 3 containers

Expert Contributor

Hi. I have a problem with the Spark 2 interpreter in Zeppelin. I configured the interpreter like this:

20470-zeppelin-1.png

When I run query like this:

%spark2.sql
select var1, count(*) as counter
from database.table_1
group by var1
order by counter desc

The Spark job runs only 3 containers and takes 13 minutes.

20471-zeppelin-2.png

Does anyone know why the Spark interpreter uses only 4.9% of the queue? How should I configure the interpreter to increase this factor?

1 ACCEPTED SOLUTION

Guru

@Mateusz Grabowski, you should enable Dynamic Resource Allocation (DRA) in Spark so that the number of executors for an application increases or decreases automatically based on resource availability.

You can choose to enable DRA in either Spark or Zeppelin.

1) Enable DRA for Spark2 as described here:

https://community.hortonworks.com/content/supportkb/49510/how-to-enable-dynamic-resource-allocation-...
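For reference, a minimal sketch of the standard Spark properties involved (the values shown are illustrative assumptions; tune them to your queue capacity). These go in the Spark2 defaults, or as properties on the Zeppelin spark2 interpreter. Note that dynamic allocation also requires the external shuffle service to be enabled on the NodeManagers.

```
# Sketch: enable Dynamic Resource Allocation for Spark2
# (executor counts are example values, adjust for your cluster)
spark.dynamicAllocation.enabled           true
spark.shuffle.service.enabled             true
spark.dynamicAllocation.initialExecutors  3
spark.dynamicAllocation.minExecutors      1
spark.dynamicAllocation.maxExecutors      20
```

With this in place, a heavy `%spark2.sql` query can scale up toward `maxExecutors` instead of staying pinned at the initial 3 containers.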

2) Enable DRA via the Livy interpreter, and run all Spark notebooks through it:

https://zeppelin.apache.org/docs/0.6.1/interpreter/livy.html
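If you go the Livy route, the equivalent settings are exposed as `livy.spark.*` properties on the Zeppelin Livy interpreter (values again illustrative):

```
# Sketch: DRA properties on the Zeppelin Livy interpreter
livy.spark.dynamicAllocation.enabled        true
livy.spark.shuffle.service.enabled          true
livy.spark.dynamicAllocation.minExecutors   1
livy.spark.dynamicAllocation.maxExecutors   20
```

Notebook paragraphs would then use `%livy.sql` instead of `%spark2.sql`.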




Expert Contributor

It works! Thank you 🙂