<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: HDP 3.1 &amp; Spark 2.3.2 - hive.table(&quot;default.table1&quot;).show is failing on java.io.IOException: java.lang.NullPointerException in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240074#M201880</link>
    <description>&lt;P&gt;Hi Tarek,&lt;/P&gt;&lt;P&gt;when I set up Hive Interactive correctly (tuning of resources is &lt;STRONG&gt;the most critical part&lt;/STRONG&gt;, otherwise reading was failing), everything ran fine and smoothly.&lt;/P&gt;&lt;P&gt;In the end I built the whole pipeline completely on Spark only, as Hive Interactive was not needed anymore and was unstable for large streaming or heavy batches - too many connections, some already closed, etc. I'm talking about volumes like 1.5 billion records with a foreachBatch sink. At this moment I can do stream reading and compaction at the same time.&lt;/P&gt;</description>
    <pubDate>Mon, 13 May 2019 14:24:48 GMT</pubDate>
    <dc:creator>xaerocom</dc:creator>
    <dc:date>2019-05-13T14:24:48Z</dc:date>
    <item>
      <title>HDP 3.1 &amp; Spark 2.3.2 - hive.table("default.table1").show is failing on java.io.IOException: java.lang.NullPointerException</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240071#M201877</link>
      <description>&lt;P&gt;On a fresh new cluster based on HDP 3.1.0 (Kerberized) I'm still facing a problem with Spark reading from Hive. The connection via HWC is not working. When I try to run &lt;EM&gt;hive.table("default.table1").show&lt;/EM&gt; I get an error message:&lt;/P&gt;&lt;PRE&gt;java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: shadehive.org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
  at com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceReader.createBatchDataReaderFactories(HiveWarehouseDataSourceReader.java:166)
  at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.inputRDD$lzycompute(DataSourceV2ScanExec.scala:64)
  at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.inputRDD(DataSourceV2ScanExec.scala:60)
  at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.inputRDDs(DataSourceV2ScanExec.scala:79)
  at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
  at org.apache.spark.sql.execution.SparkPlan$anonfun$execute$1.apply(SparkPlan.scala:131)
  at org.apache.spark.sql.execution.SparkPlan$anonfun$execute$1.apply(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.SparkPlan$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
  at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:337)
  at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
  at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$collectFromPlan(Dataset.scala:3278)
  at org.apache.spark.sql.Dataset$anonfun$head$1.apply(Dataset.scala:2489)
  at org.apache.spark.sql.Dataset$anonfun$head$1.apply(Dataset.scala:2489)
  at org.apache.spark.sql.Dataset$anonfun$52.apply(Dataset.scala:3259)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
  at org.apache.spark.sql.Dataset.head(Dataset.scala:2489)
  at org.apache.spark.sql.Dataset.take(Dataset.scala:2703)
  at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:723)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:682)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:691)
  ... 47 elided
Caused by: java.lang.RuntimeException: java.io.IOException: shadehive.org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
  at com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceReader.getSplitsFactories(HiveWarehouseDataSourceReader.java:182)
  at com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceReader.createBatchDataReaderFactories(HiveWarehouseDataSourceReader.java:162)
  ... 72 more
Caused by: java.io.IOException: shadehive.org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
  at org.apache.hadoop.hive.llap.LlapBaseInputFormat.getSplits(LlapBaseInputFormat.java:298)
  at com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceReader.getSplitsFactories(HiveWarehouseDataSourceReader.java:176)
  ... 73 more
Caused by: shadehive.org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
  at shadehive.org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:300)
  at shadehive.org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:286)
  at shadehive.org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:379)
  at org.apache.hadoop.hive.llap.LlapBaseInputFormat.getSplits(LlapBaseInputFormat.java:280)
  ... 74 more
Caused by: org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
  at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:478)
  at org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
  at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:952)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
  at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
  at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
  at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
  at com.sun.proxy.$Proxy72.fetchResults(Unknown Source)
  at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564)
  at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:792)
  at org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
  at org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
  at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
  at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
  at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:647)
  at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
  ... 3 more
Caused by: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
  at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:162)
  at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2738)
  at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229)
  at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:473)
  ... 25 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
  at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:225)
  at org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:116)
  at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
  at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
  at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
  at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
  at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:519)
  at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:511)
  at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146)
  ... 28 more
Caused by: java.io.IOException: java.lang.NullPointerException
  at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:498)
  at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:210)
  ... 39 more
Caused by: java.lang.NullPointerException: null
  at org.apache.hadoop.hive.llap.LlapUtil.generateClusterName(LlapUtil.java:117)
  at org.apache.hadoop.hive.llap.coordinator.LlapCoordinator.getLlapSigner(LlapCoordinator.java:103)
  at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:441)
  ... 40 more&lt;/PRE&gt;&lt;P&gt;I've checked the official documentation and the GitHub page. All properties are OK, but I still cannot read any data from Hive. I'm using a standard Hive JDBC connection, not the interactive one, since I'm not planning to use the LLAP engine.&lt;/P&gt;&lt;P&gt;Any idea what to set or check to avoid this error?&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;PS: I'm able to read metadata via the Hive connector, such as databases ... even writing to Hive is working, but not reading tables into a DF.&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 09 Jan 2019 07:01:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240071#M201877</guid>
      <dc:creator>xaerocom</dc:creator>
      <dc:date>2019-01-09T07:01:18Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 &amp; Spark 2.3.2 - hive.table("default.table1").show is failing on java.io.IOException: java.lang.NullPointerException</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240072#M201878</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/103355/pavelstejskal.html"&gt;@Pavel Stejskal&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Using the HiveWarehouseConnector + Hiveserver2Interactive(LLAP for managed tables) is mandatory and the reasons are explained in the &lt;A href="https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/integrating-hive/content/hive_hivewarehouseconnector_for_handling_apache_spark_data.html"&gt; HDP3 documentation&lt;/A&gt;, if you're not using it then for sure the properties are not OK, if the namespace part of it is not configured to point to the hiveserver2Interactive znode ( I think that's what you meant), then that is not correct.&lt;/P&gt;&lt;P&gt;To read a table into a DF, you have to use HiveWarehouseSession's API, i.e:&lt;/P&gt;&lt;PRE&gt;val df = hive.executeQuery("select * from web_sales")&lt;/PRE&gt;&lt;P&gt;I'd like to suggest reading throught &lt;A href="https://community.hortonworks.com/articles/223626/integrating-apache-hive-with-apache-spark-hive-war.html"&gt;this &lt;/A&gt;entire article.&lt;/P&gt;&lt;P&gt;BR.&lt;/P&gt;</description>
      <pubDate>Wed, 09 Jan 2019 08:17:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240072#M201878</guid>
      <dc:creator>dbompart</dc:creator>
      <dc:date>2019-01-09T08:17:11Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 &amp; Spark 2.3.2 - hive.table("default.table1").show is failing on java.io.IOException: java.lang.NullPointerException</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240073#M201879</link>
      <description>&lt;P&gt;hi &lt;A rel="user" href="https://community.hortonworks.com/users/103355/pavelstejskal.html"&gt;@Pavel Stejskal&lt;/A&gt;  &lt;A rel="user" href="https://community.cloudera.com/users/19399/dbompart.html" nodeid="19399"&gt;@dbompart&lt;/A&gt;&lt;/P&gt;&lt;P&gt;i am facing the same problem, and i am not querying a hive managed table, its just an external table in hive, i am able to read the metadata but not the data , can you please tell me how you fixed it ?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 11 May 2019 14:56:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240073#M201879</guid>
      <dc:creator>tarekabouzeid91</dc:creator>
      <dc:date>2019-05-11T14:56:09Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 &amp; Spark 2.3.2 - hive.table("default.table1").show is failing on java.io.IOException: java.lang.NullPointerException</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240074#M201880</link>
      <description>&lt;P&gt;Hi Tarek,&lt;/P&gt;&lt;P&gt;when I set up Hive Interactive correctly (tuning of resources is &lt;STRONG&gt;the most critical part&lt;/STRONG&gt;, otherwise reading was failing), everything ran fine and smoothly.&lt;/P&gt;&lt;P&gt;In the end I built the whole pipeline completely on Spark only, as Hive Interactive was not needed anymore and was unstable for large streaming or heavy batches - too many connections, some already closed, etc. I'm talking about volumes like 1.5 billion records with a foreachBatch sink. At this moment I can do stream reading and compaction at the same time.&lt;/P&gt;
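&lt;P&gt;As a purely illustrative sketch of such a Spark-only pipeline: Structured Streaming with a foreachBatch sink writes each micro-batch with plain Spark APIs, so no Hive Interactive connection is involved. The source, broker, topic, checkpoint path and table name below are placeholders, not values from this thread:&lt;/P&gt;&lt;PRE&gt;import org.apache.spark.sql.DataFrame

// Hypothetical streaming source - replace with whatever feeds your pipeline
val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")
  .option("subscribe", "events")
  .load()

val query = stream.writeStream
  .option("checkpointLocation", "/tmp/checkpoints/events")
  .foreachBatch { (batch: DataFrame, batchId: Long) =&gt;
    // each micro-batch is written with plain Spark, e.g. appended as ORC to a table
    batch.write.mode("append").format("orc").saveAsTable("default.events_compacted")
  }
  .start()

query.awaitTermination()&lt;/PRE&gt;</description>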
      <pubDate>Mon, 13 May 2019 14:24:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240074#M201880</guid>
      <dc:creator>xaerocom</dc:creator>
      <dc:date>2019-05-13T14:24:48Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 &amp; Spark 2.3.2 - hive.table("default.table1").show is failing on java.io.IOException: java.lang.NullPointerException</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240075#M201881</link>
      <description>&lt;P&gt;I am also facing the same issue. I have followed the documentation properly but am still hitting the issue.&lt;/P&gt;&lt;P&gt;A count on the dataframe works fine, but head does not.&lt;/P&gt;&lt;P&gt;All I am doing is:&lt;/P&gt;&lt;PRE&gt;val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build()
hive.executeQuery("select * from test1").head&lt;/PRE&gt;&lt;P&gt;Here is my stack trace:&lt;/P&gt;&lt;PRE&gt;19/07/29 14:10:20 ERROR HiveWarehouseDataSourceReader: Unable to submit query to HS2
java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: shadehive.org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
&amp;nbsp; at com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceReader.createBatchDataReaderFactories(HiveWarehouseDataSourceReader.java:166)
&amp;nbsp; at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.inputRDD$lzycompute(DataSourceV2ScanExec.scala:64)
&amp;nbsp; at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.inputRDD(DataSourceV2ScanExec.scala:60)
&amp;nbsp; at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.inputRDDs(DataSourceV2ScanExec.scala:79)
&amp;nbsp; at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
&amp;nbsp; at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
&amp;nbsp; at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
&amp;nbsp; at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
&amp;nbsp; at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
&amp;nbsp; at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
&amp;nbsp; at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
&amp;nbsp; at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
&amp;nbsp; at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:337)
&amp;nbsp; at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
&amp;nbsp; at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3278)
&amp;nbsp; at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
&amp;nbsp; at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
&amp;nbsp; at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
&amp;nbsp; at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
&amp;nbsp; at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
&amp;nbsp; at org.apache.spark.sql.Dataset.head(Dataset.scala:2489)
&amp;nbsp; at org.apache.spark.sql.Dataset.head(Dataset.scala:2496)
&amp;nbsp; ... 49 elided
Caused by: java.lang.RuntimeException: java.io.IOException: shadehive.org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
&amp;nbsp; at com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceReader.getSplitsFactories(HiveWarehouseDataSourceReader.java:182)
&amp;nbsp; at com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceReader.createBatchDataReaderFactories(HiveWarehouseDataSourceReader.java:162)
&amp;nbsp; ... 70 more
Caused by: java.io.IOException: shadehive.org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
&amp;nbsp; at org.apache.hadoop.hive.llap.LlapBaseInputFormat.getSplits(LlapBaseInputFormat.java:298)
&amp;nbsp; at com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceReader.getSplitsFactories(HiveWarehouseDataSourceReader.java:176)
&amp;nbsp; ... 71 more
Caused by: shadehive.org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
&amp;nbsp; at shadehive.org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:300)
&amp;nbsp; at shadehive.org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:286)
&amp;nbsp; at shadehive.org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:379)
&amp;nbsp; at org.apache.hadoop.hive.llap.LlapBaseInputFormat.getSplits(LlapBaseInputFormat.java:280)
&amp;nbsp; ... 72 more
Caused by: org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
&amp;nbsp; at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:478)
&amp;nbsp; at org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
&amp;nbsp; at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:952)
&amp;nbsp; at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564)
&amp;nbsp; at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:792)
&amp;nbsp; at org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
&amp;nbsp; at org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
&amp;nbsp; at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
&amp;nbsp; at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
&amp;nbsp; at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:647)
&amp;nbsp; at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
&amp;nbsp; at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
&amp;nbsp; at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
&amp;nbsp; at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:162)
&amp;nbsp; at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2738)
&amp;nbsp; at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229)
&amp;nbsp; at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:473)
&amp;nbsp; ... 13 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
&amp;nbsp; at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:225)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:116)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:519)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:511)
&amp;nbsp; at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146)
&amp;nbsp; ... 16 more
Caused by: java.io.IOException: java.lang.NullPointerException
&amp;nbsp; at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:498)
&amp;nbsp; at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:210)
&amp;nbsp; ... 27 more
Caused by: java.lang.NullPointerException: null
&amp;nbsp; at org.apache.hadoop.hive.llap.LlapUtil.generateClusterName(LlapUtil.java:117)
&amp;nbsp; at org.apache.hadoop.hive.llap.coordinator.LlapCoordinator.getLlapSigner(LlapCoordinator.java:103)
&amp;nbsp; at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:441)
&amp;nbsp; ... 28 more
&lt;/PRE&gt;</description>
      <pubDate>Mon, 29 Jul 2019 22:07:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/240075#M201881</guid>
      <dc:creator>sbb</dc:creator>
      <dc:date>2019-07-29T22:07:25Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 &amp; Spark 2.3.2 - hive.table("default.table1").show is failing on java.io.IOException: java.lang.NullPointerException</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/270037#M207245</link>
      <description>&lt;P&gt;Could you please share how you fixed this issue?&lt;/P&gt;</description>
      <pubDate>Mon, 09 Sep 2019 14:32:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/270037#M207245</guid>
      <dc:creator>Binu</dc:creator>
      <dc:date>2019-09-09T14:32:23Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 &amp; Spark 2.3.2 - hive.table("default.table1").show is failing on java.io.IOException: java.lang.NullPointerException</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/293665#M216810</link>
      <description>&lt;P&gt;Has the issue been fixed? I am also facing the issue.&lt;BR /&gt;&lt;BR /&gt;Please help here to get it resolved.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2020 11:08:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/293665#M216810</guid>
      <dc:creator>Abhishek_721</dc:creator>
      <dc:date>2020-04-10T11:08:09Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 &amp; Spark 2.3.2 - hive.table("default.table1").show is failing on java.io.IOException: java.lang.NullPointerException</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/293666#M216811</link>
      <description>&lt;P&gt;Can you help me resolve this issue?&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2020 11:09:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/293666#M216811</guid>
      <dc:creator>Abhishek_721</dc:creator>
      <dc:date>2020-04-10T11:09:57Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 &amp; Spark 2.3.2 - hive.table("default.table1").show is failing on java.io.IOException: java.lang.NullPointerException</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/293674#M216815</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/76651"&gt;@Abhishek_721&lt;/a&gt;&amp;nbsp;&amp;nbsp;As this is an older post you would have a better chance of receiving a resolution by starting a new thread. This will also provide the opportunity to provide details specific to your environment that could aid others in providing a more accurate answer to your question.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2020 13:42:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/293674#M216815</guid>
      <dc:creator>cjervis</dc:creator>
      <dc:date>2020-04-10T13:42:41Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.1 &amp; Spark 2.3.2 - hive.table("default.table1").show is failing on java.io.IOException: java.lang.NullPointerException</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/387943#M246501</link>
      <description>&lt;P&gt;Because I ran into this thread while looking for a way to solve this error, and because we found a solution, I thought it might still serve some people if I share what we found.&lt;/P&gt;&lt;P&gt;We needed HWC to profile Hive managed + transactional tables from Ataccama (a data quality solution). We found someone who had successfully got spark-submit working, checked their settings, and changed our spark-submit as follows:&lt;/P&gt;&lt;PRE&gt;COMMAND="$SPARK_HOME/bin/$SPARK_SUBMIT \
    --files $MYDIR/$LOG4J_FILE_NAME $SPARK_DRIVER_JAVA_OPTS $SPARK_DRIVER_OPTS \
    --jars {{ hwc_jar_path }} \
    --conf spark.security.credentials.hiveserver2.enabled=false \
    --conf "spark.sql.hive.hiveserver2.jdbc.url.principal=hive/_HOST@{{ ad_realm }}" \
    --conf spark.dynamicAllocation.enable=false \
    --conf spark.hadoop.metastore.catalog.default=hive \
    --conf spark.yarn.maxAppAttempts=1 \
    --conf spark.sql.legacy.parquet.int96RebaseModeInRead=CORRECTED \
    --conf spark.sql.legacy.parquet.int96RebaseModeInWrite=CORRECTED \
    --conf spark.sql.legacy.parquet.datetimeRebaseModeInRead=CORRECTED \
    --conf spark.sql.legacy.timeParserPolicy=LEGACY \
    --conf spark.sql.legacy.typeCoercion.datetimeToString.enabled=true \
    --conf spark.sql.parquet.int96TimestampConversion=true \
    --conf spark.sql.extensions=com.hortonworks.spark.sql.rule.Extensions \
    --conf spark.sql.extensions=com.qubole.spark.hiveacid.HiveAcidAutoConvertExtension \
    --conf spark.kryo.registrator=com.qubole.spark.hiveacid.util.HiveAcidKyroRegistrator \
    --conf spark.sql.sources.commitProtocolClass=org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol \
    --conf spark.datasource.hive.warehouse.read.mode=DIRECT_READER_V2 \
    --class $CLASS $JARS $MYLIB $PROPF $LAUNCH $*";
exec $COMMAND&lt;/PRE&gt;&lt;P&gt;Probably the difference was the spark.hadoop.metastore.catalog.default=hive setting.&lt;/P&gt;&lt;P&gt;The example above uses some Ansible variables:&lt;/P&gt;&lt;P&gt;hwc_jar_path: "/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/jars/hive-warehouse-connector-assembly-1.0.0.7.1.7.1000-141.jar"&lt;/P&gt;&lt;P&gt;ad_realm is our LDAP realm.&lt;/P&gt;&lt;P&gt;Hope it helps someone.&lt;/P&gt;
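&lt;P&gt;Purely as an illustration: with settings like these, the job class referenced by $CLASS could read a managed table through HWC roughly as in the sketch below. The object, database and table names are hypothetical placeholders, not from this thread:&lt;/P&gt;&lt;PRE&gt;import org.apache.spark.sql.SparkSession

// Hypothetical job submitted by the wrapper script above
object ProfileJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("hwc-read").enableHiveSupport().getOrCreate()

    // Build the HiveWarehouseSession the same way as earlier in this thread
    val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build()

    // Read a managed + transactional table into a DataFrame
    val df = hive.executeQuery("select * from some_db.some_managed_table")
    df.show(10)

    spark.stop()
  }
}&lt;/PRE&gt;</description>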
      <pubDate>Thu, 16 May 2024 12:48:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-1-Spark-2-3-2-hive-table-quot-default-table1-quot-show/m-p/387943#M246501</guid>
      <dc:creator>marcel-jan</dc:creator>
      <dc:date>2024-05-16T12:48:50Z</dc:date>
    </item>
  </channel>
</rss>

