Explorer
Posts: 19
Registered: ‎12-18-2014
Accepted Solution

(5.8.2-1.cdh5.8.2.p0.3) Hive on Spark "Completed" Job Stuck (YARN)

Cloudera Manager 5.8.2

CDH 5.8.2-1.cdh5.8.2.p0.3

 

Hive on Spark Completed Job got stuck on YARN.

 

After the Hive query finishes and returns its result, the Spark process keeps running on YARN (even though the application already shows as COMPLETED when we look at it in the application list).

As a temporary workaround, we have to kill the job manually; otherwise we can't submit another Spark job because YARN thinks the resources are still in use.
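The manual kill described above can be sketched as a small script. This is only an illustration: the application name "Hive on Spark" and the application id format are assumptions, so adjust the filter for how the job actually appears on your cluster.

```shell
# Find the stuck Hive on Spark application on YARN and kill it.
# Assumes the application is listed under the name "Hive on Spark";
# the first column of `yarn application -list` output is the app id.
APP_ID=$(yarn application -list -appStates RUNNING 2>/dev/null \
  | awk '/Hive on Spark/ {print $1}')
if [ -n "$APP_ID" ]; then
  yarn application -kill "$APP_ID"
fi
```

After the kill, the held containers are released and the next Spark job can be submitted.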

 

What I did was downgrade my cluster back to CDH 5.8.0.

Cloudera Employee
Posts: 34
Registered: ‎08-16-2016

Re: (5.8.2-1.cdh5.8.2.p0.3) Hive on Spark "Completed" Job Stuck (YARN)

Is this issue specific to CDH 5.9? What is the behavior on CDH 5.8?

AFAIK, this is the expected behavior. The HoS (Hive on Spark) application on YARN keeps running even after the query result is returned. The application is kept as a container for future queries: starting a new container is an expensive operation, so having one warmed up speeds up subsequent queries. You should observe that the next queries run noticeably faster.
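A minimal sketch of this session behavior, assuming a working `hive` CLI and a hypothetical table `web_logs`: both queries run inside one session, so the second reuses the already-running Spark application and should come back faster, and the YARN application is released only when the session ends.

```shell
# Two queries in a single Hive session share one warmed-up
# Spark application on YARN (web_logs is a hypothetical table).
hive <<'EOF'
SELECT count(*) FROM web_logs;  -- cold: starts the Spark application
SELECT count(*) FROM web_logs;  -- warm: reuses the running application
EOF
# The session closes at the end of the heredoc, releasing the YARN app.
```

This is also why keeping the Hive shell open leaves the application holding YARN resources.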

Please provide additional info on the behavior in CDH 5.8 so we can assist you further. Thanks
Cloudera Employee
Posts: 34
Registered: ‎08-16-2016

Re: (5.8.2-1.cdh5.8.2.p0.3) Hive on Spark "Completed" Job Stuck (YARN)

Could you please post an update so we can determine whether there is a regression? Thanks
Explorer
Posts: 19
Registered: ‎12-18-2014

Re: (5.8.2-1.cdh5.8.2.p0.3) Hive on Spark "Completed" Job Stuck (YARN)

I have already downgraded my cluster to CDH 5.8.0 and tried what you suggested in your previous reply.

Yes, you are correct: even though the job shows as completed, when I ran a HoS query and didn't quit the Hive shell, it still used YARN resources.

Thanks for pointing out the matter.
Cloudera Employee
Posts: 34
Registered: ‎08-16-2016

Re: (5.8.2-1.cdh5.8.2.p0.3) Hive on Spark "Completed" Job Stuck (YARN)

Awesome! Thanks for the update.