<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Hive on Spark CDH 5.7 - Failed to create spark client in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/61454#M70080</link>
    <description>&lt;P&gt;The error was a configuration issue. We needed either to lower the executor memory (spark.executor.memory) and the executor memory overhead (spark.yarn.executor.memoryOverhead), or to increase the maximum memory allocation (yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This link is a useful reference:&lt;/P&gt;&lt;P&gt;&lt;A href="http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/" target="_blank"&gt;http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We tried all the combinations, and the following properties gave the best result in our cluster:&lt;/P&gt;&lt;P&gt;set hive.execution.engine=spark;&lt;BR /&gt;set spark.executor.memory=4g;&lt;BR /&gt;set yarn.nodemanager.resource.memory-mb=12288;&lt;BR /&gt;set yarn.scheduler.maximum-allocation-mb=2048;&lt;/P&gt;</description>
    <pubDate>Thu, 02 Nov 2017 11:55:00 GMT</pubDate>
    <dc:creator>TamilP</dc:creator>
    <dc:date>2017-11-02T11:55:00Z</dc:date>
    <item>
      <title>Hive on Spark CDH 5.7 - Failed to create spark client</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/61175#M70078</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are getting the following error while executing Hive queries with the Spark engine:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'&lt;/P&gt;&lt;P&gt;FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The following properties are set to use Spark as the execution engine instead of MapReduce:&lt;/P&gt;&lt;P&gt;set hive.execution.engine=spark;&lt;/P&gt;&lt;P&gt;set spark.executor.memory=2g;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also tried changing the following properties:&lt;/P&gt;&lt;P&gt;set yarn.scheduler.maximum-allocation-mb=2048;&lt;/P&gt;&lt;P&gt;set yarn.nodemanager.resource.memory-mb=2048;&lt;/P&gt;&lt;P&gt;set spark.executor.cores=4;&lt;/P&gt;&lt;P&gt;set spark.executor.memory=4g;&lt;/P&gt;&lt;P&gt;set spark.yarn.executor.memoryOverhead=750;&lt;/P&gt;&lt;P&gt;set hive.spark.client.server.connect.timeout=900000ms;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 12:26:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/61175#M70078</guid>
      <dc:creator>TamilP</dc:creator>
      <dc:date>2022-09-16T12:26:00Z</dc:date>
    </item>
    <item>
      <title>Re: Hive on Spark CDH 5.7 - Failed to create spark client</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/61259#M70079</link>
      <description>You will need to check both the HS2 log and the Spark application log to get the real error message.&lt;BR /&gt;&lt;BR /&gt;"Failed to create spark client" is too generic; the underlying cause could be almost anything.</description>
      <pubDate>Thu, 26 Oct 2017 10:57:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/61259#M70079</guid>
      <dc:creator>EricL</dc:creator>
      <dc:date>2017-10-26T10:57:18Z</dc:date>
    </item>
    <item>
      <title>Re: Hive on Spark CDH 5.7 - Failed to create spark client</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/61454#M70080</link>
      <description>&lt;P&gt;The error was a configuration issue. We needed either to lower the executor memory (spark.executor.memory) and the executor memory overhead (spark.yarn.executor.memoryOverhead), or to increase the maximum memory allocation (yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This link is a useful reference:&lt;/P&gt;&lt;P&gt;&lt;A href="http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/" target="_blank"&gt;http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We tried all the combinations, and the following properties gave the best result in our cluster:&lt;/P&gt;&lt;P&gt;set hive.execution.engine=spark;&lt;BR /&gt;set spark.executor.memory=4g;&lt;BR /&gt;set yarn.nodemanager.resource.memory-mb=12288;&lt;BR /&gt;set yarn.scheduler.maximum-allocation-mb=2048;&lt;/P&gt;</description>
      <pubDate>Thu, 02 Nov 2017 11:55:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/61454#M70080</guid>
      <dc:creator>TamilP</dc:creator>
      <dc:date>2017-11-02T11:55:00Z</dc:date>
    </item>
    <item>
      <title>Re: Hive on Spark CDH 5.7 - Failed to create spark client</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/61978#M70081</link>
      <description>May I ask: did you set these parameters just in your Beeline SQL script, or do you also need to change the configuration XML for HS2?</description>
      <pubDate>Sun, 19 Nov 2017 05:02:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/61978#M70081</guid>
      <dc:creator>VictorMa</dc:creator>
      <dc:date>2017-11-19T05:02:58Z</dc:date>
    </item>
    <item>
      <title>Re: Hive on Spark CDH 5.7 - Failed to create spark client</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/62000#M70082</link>
      <description>&lt;P&gt;Yes, just in the HQL file; nothing in any XML file.&lt;/P&gt;</description>
      <pubDate>Mon, 20 Nov 2017 11:35:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-on-Spark-CDH-5-7-Failed-to-create-spark-client/m-p/62000#M70082</guid>
      <dc:creator>TamilP</dc:creator>
      <dc:date>2017-11-20T11:35:21Z</dc:date>
    </item>
  </channel>
</rss>