Support Questions


Need Spark Thrift Server design details because STS hangs about 2 hours after startup

Explorer

Hi all,

I am running Spark Thrift Server on YARN in client mode with 50 executor nodes. First I set -Xmx=25g for the driver; STS ran for about 30 minutes and then hung. I then increased -Xmx to 40G for the driver; STS ran for about 1 hour and then hung. I increased -Xmx to 56G for the driver; STS ran for about 2 hours and then hung again. I cannot keep increasing the JVM heap. In all cases I did not see any out-of-memory exception in the log file. It seems that whenever I increased the JVM heap on the driver, STS consumed most of it. I dumped the JVM heap and saw that SparkSession objects are the biggest objects (one is about 10G, the others are about 4-6G). I don't understand why the SparkSession objects are so large. Please:

1) Do you have any suggestions to help me resolve this issue?

2) Is there anywhere I can read more about the way STS works? I need a document like https://cwiki.apache.org/confluence/display/Hive/Design#Design-HiveArchitecture to understand how STS processes queries from clients, because it seems that driver memory keeps increasing as more people connect to STS and run queries.

3) How can I size the memory that needs to be configured for the driver in STS?

Thank you very much,

14 REPLIES

Super Collaborator

Hi,

Is your Thrift server crashing with a JVM out-of-heap error?

This may be related to the STS daemon itself rather than to the drivers and executors.

Please try increasing the daemon memory in spark-env.sh (this isn't the memory for the driver/executors; it's for the Spark daemons, i.e. the history server and STS). It is 1 GB by default. Increase it to 4-6 GB.

#Memory for Master, Worker and history server (default: 1024MB)

export SPARK_DAEMON_MEMORY=6000m

Thank You

Explorer

@tsharma: thank you very much for your response. Based on your suggestion I googled it, and this parameter seems to be intended for Spark Standalone mode (https://spark.apache.org/docs/2.0.2/spark-standalone.html). My application is running on YARN. Should I still configure this parameter?

Thanks,

Super Collaborator

Yes, this takes effect in cluster mode too and dictates the memory for the Spark History Server and STS daemons. Are you using HDP? If so, you should be able to set it via Ambari; otherwise, set it directly in spark-env.sh. Please do try this.

Super Collaborator

@anobi do

Spark Thrift Server is just a gateway to submit applications to Spark, so standard Spark configurations are applicable directly.

Please see the links below; I found them very useful.

https://developer.ibm.com/hadoop/2016/08/22/how-to-run-queries-on-spark-sql-using-jdbc-via-thrift-se...

https://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-1/

https://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/

Main Properties -> https://spark.apache.org/docs/latest/configuration.html

Also, STS honors the configuration file /etc/spark2/conf/spark-thrift-sparkconf.conf, so set your spark.executor.memory, spark.driver.memory, spark.executor.cores and spark.executor.instances there.
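For example, a minimal spark-thrift-sparkconf.conf along those lines might look like this (the values are placeholders only; size them for your own cluster and workload):

spark.driver.memory 8g
spark.executor.memory 4g
spark.executor.cores 2
spark.executor.instances 50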

Thank You

Explorer

@tsharma: thank you. I will try to configure my system following your suggestion.

Explorer

Hi all and @tsharma,

I didn't see an OOM exception in the STS log file. However, when I added "-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy -XX:+PrintTenuringDistribution", I saw this message in the GC log file: "G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap already fully expanded" (please see the detailed messages below). It seems the memory is not enough, but when I increased -Xmx, STS only ran a little longer and then hung again. Back to my previous questions:

1) What is kept in driver memory? Why is it so large (48G), and larger still whenever I increase -Xmx? As @tsharma said, STS is only a gateway. I am using client mode (not cluster mode).

2) How can I size the memory that needs to be configured for the driver in STS?

3) I need a document like https://cwiki.apache.org/confluence/display/Hive/Design#Design-HiveArchitecture to understand how STS processes queries from clients, because it seems that driver memory keeps increasing as more people connect to STS and run queries.

Thank you,

My gc log:

2017-11-16T08:32:23.876+0700: 46776.282: [GC pause (G1 Evacuation Pause) (young)
Desired survivor size 167772160 bytes, new threshold 15 (max 15)
46776.282: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 0, predicted base time: 13.63 ms, remaining time: 186.37 ms, target pause time: 200.00 ms]
46776.282: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 0 regions, survivors: 0 regions, predicted young region time: 0.00 ms]
46776.282: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 0 regions, survivors: 0 regions, old: 0 regions, predicted pause time: 13.63 ms, target pause time: 200.00 ms]
46776.289: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: recent GC overhead higher than threshold after GC, recent GC overhead: 97.41 %, threshold: 10.00 %, uncommitted: 0 bytes, calculated expansion amount: 0 bytes (20.00 %)]
, 0.0069342 secs]
[Parallel Time: 3.6 ms, GC Workers: 33]
[GC Worker Start (ms): Min: 46776282.1, Avg: 46776282.4, Max: 46776282.7, Diff: 0.6]
[Ext Root Scanning (ms): Min: 1.6, Avg: 2.0, Max: 3.2, Diff: 1.7, Sum: 64.4]
[SATB Filtering (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Update RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
[Processed Buffers: Min: 0, Avg: 0.0, Max: 1, Diff: 1, Sum: 1]
[Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Object Copy (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 1.3]
[Termination (ms): Min: 0.0, Avg: 1.0, Max: 1.1, Diff: 1.1, Sum: 34.4]
[GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.4]
[GC Worker Total (ms): Min: 2.7, Avg: 3.0, Max: 3.3, Diff: 0.6, Sum: 100.5]
[GC Worker End (ms): Min: 46776285.4, Avg: 46776285.4, Max: 46776285.4, Diff: 0.1]
[Code Root Fixup: 0.7 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.6 ms]
[Other: 2.0 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 1.1 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.6 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 0.0B(2432.0M)->0.0B(2432.0M) Survivors: 0.0B->0.0B Heap: 47.5G(48.0G)->47.5G(48.0G)]
[Times: user=0.10 sys=0.00, real=0.01 secs]
46776.290: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: allocation request failed, allocation request: 32 bytes]
46776.290: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 33554432 bytes, attempted expansion amount: 33554432 bytes]
46776.290: [G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap already fully expanded]
2017-11-16T08:32:23.884+0700: 46776.290: [Full GC (Allocation Failure)

Super Collaborator

@anobi do

For Spark driver memory, see this link -> https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-driver.html

Also, when you do a collect or take, the result comes back to the driver; the driver will throw an error if the result of the collect or take is larger than the free space. Hence the driver is kept large to account for that when you have big datasets. However, the default is set to 1G or 2G, because the driver mainly schedules tasks in cooperation with YARN, with the actual operations being performed on the executors themselves (which hold the data and can cache and process it).
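To illustrate (a hypothetical session; the host, port and table name are placeholders), a query like the following submitted over JDBC has its full result set brought back through the STS driver before it is returned to the client:

beeline -u "jdbc:hive2://sts-host:10015/default" -n hive -e "SELECT * FROM big_table"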

When you increase the number of sessions, the STS daemon memory has to grow too, because it has to keep listening for and handling those sessions.

My thrift server process was started like this:

hive 27597 13 Nov15 ?00:49:53 /usr/lib/jvm/java-1.8.0/bin/java -Dhdp.version=2.6.1.0-129 -cp /usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx6000m org.apache.spark.deploy.SparkSubmit --properties-file /usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal


Note that the -Xmx here corresponds to the Thrift daemon memory rather than the driver memory; the driver memory is taken from spark2-thriftserver/conf/spark-thrift-sparkconf.conf, which is internally a symbolic link to the one inside /etc.

If you don't override them there, it will just pick the defaults. So please have spark.executor.memory and spark.driver.memory defined there.

Can you log in to your node, run ps -eaf | grep thrift, and paste the output here?
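For reference, the things to check in that output (it will of course differ on your node) are the -Xmx value and the --properties-file path:

ps -eaf | grep thrift
# look for -Xmx (daemon memory) and --properties-file (where driver/executor settings are read from)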

Did you set SPARK_DAEMON_MEMORY=6000m as I asked earlier?

Are you using HDP/Ambari?

If yes, please set it directly here as shown:

screen-shot-2017-11-16-at-104601-am.png

And set thrift-server parameters here:

screen-shot-2017-11-16-at-104834-am.png

Just for example.

If you're not using HDP/Ambari,

Set SPARK_DAEMON_MEMORY in spark-env.sh and the thrift parameters in /etc/spark2/conf/spark-thrift-sparkconf.conf, then start the thrift server. For example:

spark.driver.cores 1
spark.driver.memory 40G
spark.executor.cores 1
spark.executor.instances 13
spark.executor.memory 40G

Or you can pass the thrift parameters dynamically at startup, as mentioned in the IBM link I sent; see the sketch below.
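For instance (the install path is illustrative; the values are the ones listed above):

$SPARK_HOME/sbin/start-thriftserver.sh \
  --master yarn \
  --conf spark.driver.memory=40G \
  --conf spark.executor.memory=40G \
  --conf spark.executor.cores=1 \
  --conf spark.executor.instances=13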

You can cross-check your configuration in the Environment tab when you open your application in the Spark History Server.

Even I couldn't find a document explaining thrift-server in detail.

Please confirm that you've done the above and cross-check the environment in the Spark UI.

Super Collaborator

You may also be hitting a bug. Check the links below against your Spark version.

https://issues.apache.org/jira/browse/SPARK-18857

https://forums.databricks.com/questions/344/how-does-the-jdbc-odbc-thrift-server-stream-query.html

https://stackoverflow.com/questions/35046692/spark-incremental-collect-to-a-partition-causes-outofme...

Regardless, please try with spark.sql.thriftServer.incrementalCollect true in the thrift conf, or start the thrift server with that setting.

It is set to false by default; this is an important thing to check and has a direct bearing on the driver heap (if you are in fact running out of it). Read the link below:

http://www.russellspitzer.com/2017/05/19/Spark-Sql-Thriftserver/
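A minimal sketch of enabling it, assuming the standard Spark configuration mechanisms, either in the conf file or on the start command:

# in spark-thrift-sparkconf.conf
spark.sql.thriftServer.incrementalCollect true

# or at startup
start-thriftserver.sh --conf spark.sql.thriftServer.incrementalCollect=true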

Explorer

Thank you very much for your response, @tsharma. I do not use HDP for my STS. I will follow your suggestions. I am wondering how you calculated the memory needed for your cluster? Do you have any guidelines, please? As you can see in my log message above, I already set the memory to 48G but it seems to take all of it, and if I increase it, all the memory is taken again ([Eden: 0.0B(2432.0M)->0.0B(2432.0M) Survivors: 0.0B->0.0B Heap: 47.5G(48.0G)->47.5G(48.0G)]).

Thanks,