
Spark2 Thrift Server not start on ambari cluster


In our Ambari cluster (version 2.6), we have 3 master nodes, with the Spark2 Thrift Server installed on master01 and master03.

When we start one or both of the Spark Thrift Servers, they run for a short time (~30 seconds) and then fail back.

We can see the following details about the Spark Thrift Server in the ambari-agent log.

What could be the problem?

[attached screenshot: 47407-capture.png]

INFO 2018-01-01 22:14:51,827 RecoveryManager.py:255 - SPARK2_THRIFTSERVER needs recovery, desired = STARTED, and current = INSTALLED.
INFO 2018-01-01 22:15:02,732 RecoveryManager.py:255 - SPARK2_THRIFTSERVER needs recovery, desired = STARTED, and current = INSTALLED.
INFO 2018-01-01 22:15:06,054 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_THRIFTSERVER of service SPARK2 of cluster hdp to the queue.
INFO 2018-01-01 22:15:06,153 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_CLIENT of service SPARK2 of cluster hdp to the queue.
INFO 2018-01-01 22:15:06,501 RecoveryManager.py:255 - SPARK2_THRIFTSERVER needs recovery, desired = STARTED, and current = INSTALLED.
INFO 2018-01-01 22:15:13,347 RecoveryManager.py:255 - SPARK2_THRIFTSERVER needs recovery, desired = STARTED, and current = INSTALLED.
INFO 2018-01-01 22:15:23,356 RecoveryManager.py:255 - SPARK2_THRIFTSERVER needs recovery, desired = STARTED, and current = INSTALLED.
Michael-Bronson
1 ACCEPTED SOLUTION

Super Guru

[Solution content hidden by the site; viewing requires community login.]
8 REPLIES

Master Guru

Anything else showing in Ambari? Is Spark2 running? Any other issues?


I updated my question; see the attached picture for details. Also, only the Thrift Server is the problem; all the other services are up.

Michael-Bronson



What is the variable that represents the "max applications allowed in YARN"?

Michael-Bronson

Super Guru
@Michael Bronson

Go to YARN -> Configs -> Advanced -> Scheduler -> Capacity Scheduler. Check 'yarn.scheduler.capacity.maximum-applications'.

Default should be 10000.
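If you prefer to check the value outside the Ambari UI, it can also be read from capacity-scheduler.xml on disk (the path varies by install; /etc/hadoop/conf/capacity-scheduler.xml is a common HDP location, but treat that as an assumption). A minimal Python sketch that parses an illustrative fragment of that file:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment of capacity-scheduler.xml; on a real cluster you
# would instead do ET.parse("/etc/hadoop/conf/capacity-scheduler.xml")
# (path is an assumption, adjust for your install).
XML = """
<configuration>
  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>10000</value>
  </property>
</configuration>
"""

root = ET.fromstring(XML)
# Build a name -> value map over all <property> entries
props = {p.findtext("name"): p.findtext("value") for p in root.findall("property")}
print(props["yarn.scheduler.capacity.maximum-applications"])
```

This prints the configured limit (10000 in the sample fragment, matching the default mentioned above).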


In my cluster, yarn.scheduler.capacity.maximum-applications=10000. Can we decrease or increase it to see if that helps?

Michael-Bronson


Another thing: you said "there may be a resource crunch on YARN". Is it possible to validate that? I mean, how can we verify that it is actually a resource problem?

Michael-Bronson
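One way to sanity-check for a YARN resource crunch is the ResourceManager REST API: GET http://&lt;rm-host&gt;:8088/ws/v1/cluster/metrics reports pending applications and free memory/vcores. The sketch below parses a sample response offline (the values are made up for illustration; the field names come from the standard clusterMetrics payload):

```python
import json

# Sample response from GET http://<rm-host>:8088/ws/v1/cluster/metrics
# (values are illustrative, not from a real cluster)
sample = """
{"clusterMetrics": {"appsPending": 3,
                    "availableMB": 0,
                    "availableVirtualCores": 0,
                    "allocatedMB": 65536}}
"""

m = json.loads(sample)["clusterMetrics"]
# Apps waiting while no memory or vcores are free suggests a resource crunch
crunch = m["appsPending"] > 0 and (m["availableMB"] == 0 or m["availableVirtualCores"] == 0)
print("resource crunch likely" if crunch else "resources look free")
```

If appsPending stays above zero while availableMB is 0, the Thrift Server's YARN application may simply be stuck waiting for containers.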

Explorer

This can happen if Spark1 and Spark2 are both running on the same node. Try killing the process, then delete the service and add it on a separate node. That should work.