Spark job failing when submitted from a Jupyter notebook

Explorer

I built Spark2 on CDH 5.16 and can submit Scala jobs with no issues. I am also able to launch pyspark2, but when I try to run a simple job from a Jupyter notebook it throws the error below. Can you please suggest what might be wrong? Also, what are the alternatives for submitting Python jobs to Spark apart from a Jupyter notebook? Please advise.

[I 23:08:33.864 NotebookApp] Adapting to protocol v5.1 for kernel f8d7200b-6718-49f6-86e9-c051fb6d84a6

[Stage 0:>                                                          (0 + 0) / 2]Exception in thread "dispatcher-event-loop-0" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3236)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at org.apache.spark.util.ByteBufferOutputStream.write(ByteBufferOutputStream.scala:41)
    at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1853)

Thanks

CS

19/08/06 23:10:41 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

[Stage 0:>                                                          (0 + 0) / 2]19/08/06 23:10:47 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Requesting driver to remove executor 2 for reason Container marked as failed: container_1565048178604_0033_01_000003 on host: ukvmlx-rdk-22.rms.com. Exit status: 1. Diagnostics: Exception from container-launch.

1 ACCEPTED SOLUTION


Hi, the probable root cause is that the Spark job submitted from the Jupyter notebook uses different memory configuration parameters. So I don't think the issue is Jupyter itself, but rather the executor and driver memory settings: YARN is not able to provide enough resources (i.e. memory).

19/08/06 23:10:41 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Check your cluster settings:
 - how much memory YARN has allocated to the NodeManagers, and how big a single container can be
 - what submit options your Spark job uses (see the sketch below)
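To make that concrete, here is a minimal sketch of how those settings might be expressed from a notebook. The memory values are placeholders rather than recommendations for your cluster, and the app name is made up; the YARN properties named in the comments are the standard ones to check in your cluster configuration:

# Sketch only: example values, not tuned for any particular cluster.
# Note that spark.driver.memory takes effect only if it is set before the
# driver JVM starts, so it is ignored if the kernel already has a running
# SparkContext.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("jupyter-memory-check")          # hypothetical app name
    .master("yarn")
    .config("spark.driver.memory", "4g")      # example value
    .config("spark.executor.memory", "2g")    # must fit inside a YARN container
    .config("spark.executor.instances", "2")  # example value
    .getOrCreate()
)

# On the YARN side, the containers requested above have to fit within:
#   yarn.nodemanager.resource.memory-mb   - memory available per NodeManager
#   yarn.scheduler.maximum-allocation-mb  - largest single container YARN will grant
# If the requested executor memory plus overhead exceeds these limits, the job
# can sit at "Initial job has not accepted any resources" or containers get killed.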


3 REPLIES


Rising Star

Hi Chittu,

Your issue here is that your JVM process is running out of memory, specifically heap space:

java.lang.OutOfMemoryError: Java heap space

Judging from the output you shared, I believe it is your driver that is running out of memory, so you would need to increase the maximum heap size for the driver. That is done by setting the spark.driver.memory parameter or by passing the --driver-memory flag to the Spark command being used.
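For example (a sketch, assuming the notebook starts its own SparkContext through the pyspark library rather than a pre-configured kernel; the 4g and 2g values are illustrations only), the flag can be passed through PYSPARK_SUBMIT_ARGS before the session is created:

# Sketch: set --driver-memory before the first SparkContext/SparkSession is
# created, because the driver heap cannot be resized once the JVM is running.
import os

os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--master yarn --driver-memory 4g --executor-memory 2g pyspark-shell"
)

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("driver-heap-example").getOrCreate()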

Cloudera Employee

Hi,

As mentioned in the previous posts, did you try increasing the memory, and did that resolve the issue?

Please let us know if you are still facing any problems.

Thanks

AKR