Created 07-19-2016 07:04 AM
Hi,
I have downloaded the HDP 2.4 sandbox to evaluate the new memory management. Below are the environment details:
- Windows Desktop (64 bit) with 16 GB memory
- Oracle VM player
- HDP 2.4 (Sandbox)
I started the VM and checked the executors for the Spark Thrift server (port 4040). The value for Storage Memory is 511.5 MB, which suggests the total Java heap size is ~1 GB.
Kindly let me know how I can increase the Java heap size and what settings to change for other dependent services/configuration.
Thanks,
Yogesh
Created 07-19-2016 07:08 AM
Increase spark.executor.memory to the desired value and then adjust spark.memory.storageFraction, which defaults to 0.5 (with this default, roughly 0.9 * 0.5 * executor memory is used as storage memory).
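For example, on the sandbox this could look roughly like the following (the 4g value is only an illustration, not a recommendation):

# In conf/spark-defaults.conf (or passed per job with --conf):
spark.executor.memory          4g
spark.memory.storageFraction   0.5

With those values and the rule of thumb above, storage memory would come out to roughly 0.9 * 0.5 * 4 GB ≈ 1.8 GB instead of the ~511 MB you see now.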
Created 07-19-2016 07:12 AM
Thanks Rajkumar for the quick reply.
I have increased the value for spark.executor.memory but am not sure how to change the value for spark.memory.storageFraction. Please suggest where to find spark.memory.storageFraction.
Thanks,
Yogesh
Created 07-19-2016 07:30 AM
You can update this in the conf/spark-env.sh file, or override it per job with an option like this:
--conf spark.storage.memoryFraction=0.4
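If you would rather set it once for the Thrift server than per job, a minimal sketch would be to add the properties to spark-defaults.conf (assuming the usual HDP layout under /usr/hdp/current/spark-client/conf; adjust the path if your sandbox differs) and restart the Thrift server:

# spark-defaults.conf (illustrative values)
spark.executor.memory          4g
spark.memory.storageFraction   0.5

One note: spark.storage.memoryFraction is the legacy setting from the old memory manager; on the Spark 1.6 that ships with the HDP 2.4 sandbox, the unified memory management you are evaluating is tuned with spark.memory.storageFraction (and spark.memory.fraction).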
Created 07-19-2016 07:14 AM
Example from the Spark doc page (http://spark.apache.org/docs/latest/submitting-applications.html)
# Run on a Spark standalone cluster in cluster deploy mode with supervise
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000
--executor-memory is the option you want to adjust.
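Putting both suggestions together, an invocation against the sandbox could look roughly like this (the master, memory size, and fraction are placeholders, not recommendations):

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  --executor-memory 2G \
  --conf spark.memory.storageFraction=0.5 \
  /path/to/examples.jar \
  100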