
Spark Thrift Server not starting


We have an Ambari cluster with 3 worker machines (each worker has 8 GB of memory).

When we start the Spark Thrift Server on the master01/master03 machines, we get the following errors:

Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
18/02/05 18:12:52 WARN Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
18/02/05 18:12:53 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (10240+1024 MB) is above the max threshold (6144 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.

Please advise: what do these errors mean?

Regarding "Required executor memory (10240+1024 MB)": what are these memory values, and how do we set the parameters in Spark (or elsewhere) in order to solve this issue?

Michael-Bronson
1 ACCEPTED SOLUTION

Expert Contributor

Hi, @Michael Bronson

`spark.executor.memory` seems to be set to 10240 MB.

Please change it in Ambari, under `spark-thrift-conf`.
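
For example, the entry could look like the sketch below. The `4096m` value is only an illustrative assumption for this cluster's 6144 MB container limit; whichever value you pick, executor memory plus the executor memory overhead must stay below `yarn.scheduler.maximum-allocation-mb`.

```
# Illustrative value (assumed), set via Ambari in spark-thrift-conf:
# 4096 MB + max(384 MB, 10% of 4096 MB) = 4506 MB, which fits under the 6144 MB limit
spark.executor.memory=4096m
```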


3 REPLIES



Thank you. Could you please explain this variable and who is responsible for setting it? (I mean, is this value set by the cluster itself?) At first glance it seemed the workers did not have enough memory, so I increased the worker memory to 32 GB instead of 8 GB.

Michael-Bronson

Expert Contributor

It's the memory size for a Spark executor (worker), and there is additional per-executor overhead on top of it. You need to set a proper value yourself. In a YARN environment, the executor memory plus that overhead must be smaller than the YARN container limit, which is why Spark shows you this error message.
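
To make the numbers in the error concrete: with the default overhead of max(384 MB, 10% of the executor memory), a 10240 MB executor requests a 10240 + 1024 = 11264 MB container, which exceeds the 6144 MB maximum. So either lower `spark.executor.memory`, or, after adding RAM to the workers, raise the YARN limits. The values below are only an illustrative sketch, not recommendations for your cluster:

```
# Illustrative yarn-site.xml values (assumed), adjustable in Ambari under YARN:
yarn.nodemanager.resource.memory-mb=24576    # total memory a NodeManager may give to containers
yarn.scheduler.maximum-allocation-mb=12288   # largest single container YARN will allocate
```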

It's an application property. For normal Spark jobs, the users are responsible, because each application can set its own `spark.executor.memory` with `spark-submit`. For the Spark Thrift Server, admins should manage it properly whenever they adjust the YARN configuration.
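
For instance, a user could set it per job like this (the class and jar names are placeholders):

```
# Placeholder application; --executor-memory plus overhead must fit the YARN container limit
spark-submit \
  --master yarn \
  --executor-memory 4g \
  --class com.example.MyApp \
  my-app.jar
```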

For more information, please see the application properties documentation: http://spark.apache.org/docs/latest/configuration.html#application-properties