<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Exception while using Spark HBase Connector on HDP2.6 in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exception-while-using-Spark-HBase-Connector-on-HDP2-6/m-p/181657#M73258</link>
    <description>&lt;P&gt;Solved it. The DataFrame was missing values for the RowKey, as pointed out by the error:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog.initRowKey(HBaseTableCatalog.scala:141) at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog.&amp;lt;init&amp;gt;(HBaseTableCatalog.scala:152) at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog$.apply(HBaseTableCatalog.scala:209)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;I created the Row object to include all of the DataFrame columns, and then it worked.&lt;/P&gt;</description>
    <pubDate>Wed, 10 Jan 2018 05:05:02 GMT</pubDate>
    <dc:creator>mrizvi</dc:creator>
    <dc:date>2018-01-10T05:05:02Z</dc:date>
    <item>
      <title>Exception while using Spark HBase Connector on HDP2.6</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exception-while-using-Spark-HBase-Connector-on-HDP2-6/m-p/181656#M73257</link>
      <description>&lt;P&gt;Hi guys,&lt;/P&gt;&lt;P&gt;I am using Spark 1.6.3 and HBase 1.1.2 on HDP 2.6. I have to use Spark 1.6 and cannot move to Spark 2. The connector jar is shc-1.0.0-1.6-s_2.10.jar. I am writing to an HBase table from a PySpark DataFrame:&lt;/P&gt;&lt;PRE&gt;cat = json.dumps({"table": {"namespace": "dsc", "name": "table1", "tableCoder": "PrimitiveType"},
                  "rowkey": "key",
                  "columns": {"individual_id": {"cf": "rowkey", "col": "key", "type": "string"},
                              "model_id": {"cf": "cf1", "col": "model_id", "type": "string"},
                              "individual_id": {"cf": "cf1", "col": "individual_id", "type": "string"},
                              "individual_id_proxy": {"cf": "cf1", "col": "individual_id_proxy", "type": "string"}}})

df.write.option("catalog",cat).format("org.apache.spark.sql.execution.datasources.hbase").save()
&lt;/PRE&gt;&lt;P&gt;The error is:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;An error occurred while calling o202.save.
: java.lang.UnsupportedOperationException: empty.tail
        at scala.collection.TraversableLike$class.tail(TraversableLike.scala:445)
        at scala.collection.mutable.ArraySeq.scala$collection$IndexedSeqOptimized$super$tail(ArraySeq.scala:45)
        at scala.collection.IndexedSeqOptimized$class.tail(IndexedSeqOptimized.scala:123)
        at scala.collection.mutable.ArraySeq.tail(ArraySeq.scala:45)
        at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog.initRowKey(HBaseTableCatalog.scala:141)
        at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog.&amp;lt;init&amp;gt;(HBaseTableCatalog.scala:152)
        at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog$.apply(HBaseTableCatalog.scala:209)
        at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation.&amp;lt;init&amp;gt;(HBaseRelation.scala:163)
        at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(HBaseRelation.scala:58)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:209)
        at java.lang.Thread.run(Thread.java:745)&lt;/STRONG&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Please let me know if anyone has come across this. &lt;/P&gt;</description>
      <pubDate>Tue, 09 Jan 2018 07:01:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exception-while-using-Spark-HBase-Connector-on-HDP2-6/m-p/181656#M73257</guid>
      <dc:creator>mrizvi</dc:creator>
      <dc:date>2018-01-09T07:01:10Z</dc:date>
    </item>
    <item>
      <title>Re: Exception while using Spark HBase Connector on HDP2.6</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exception-while-using-Spark-HBase-Connector-on-HDP2-6/m-p/181657#M73258</link>
      <description>&lt;P&gt;Solved it. The DataFrame was missing values for the RowKey, as pointed out by the error:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog.initRowKey(HBaseTableCatalog.scala:141) at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog.&amp;lt;init&amp;gt;(HBaseTableCatalog.scala:152) at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog$.apply(HBaseTableCatalog.scala:209)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;I created the Row object to include all of the DataFrame columns, and then it worked.&lt;/P&gt;</description>
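      <!--
      For reference, a minimal sketch of the working pattern described in this reply, using SHC from
      PySpark on Spark 1.6. The table, namespace, and cf1 column names are taken from the original
      post; the dedicated "key" DataFrame column and the sample values are illustrative assumptions.
      The point is that the catalog maps one distinct DataFrame column to the HBase rowkey, and every
      column named in the catalog carries a value in each Row.

      import json
      from pyspark import SparkContext
      from pyspark.sql import SQLContext, Row

      sc = SparkContext(appName="shc-write-sketch")
      sqlContext = SQLContext(sc)

      # Catalog: one distinct entry ("key") is mapped to cf "rowkey"; the remaining
      # columns go to column family cf1.
      cat = json.dumps({
          "table": {"namespace": "dsc", "name": "table1", "tableCoder": "PrimitiveType"},
          "rowkey": "key",
          "columns": {
              "key":                 {"cf": "rowkey", "col": "key", "type": "string"},
              "model_id":            {"cf": "cf1", "col": "model_id", "type": "string"},
              "individual_id":       {"cf": "cf1", "col": "individual_id", "type": "string"},
              "individual_id_proxy": {"cf": "cf1", "col": "individual_id_proxy", "type": "string"},
          },
      })

      # Each Row supplies a value for every column named in the catalog, the rowkey column included.
      rows = [Row(key="id-001", model_id="m1", individual_id="id-001", individual_id_proxy="p-001")]
      df = sqlContext.createDataFrame(rows)

      (df.write
         .option("catalog", cat)
         .format("org.apache.spark.sql.execution.datasources.hbase")
         .save())
      -->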
      <pubDate>Wed, 10 Jan 2018 05:05:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exception-while-using-Spark-HBase-Connector-on-HDP2-6/m-p/181657#M73258</guid>
      <dc:creator>mrizvi</dc:creator>
      <dc:date>2018-01-10T05:05:02Z</dc:date>
    </item>
  </channel>
</rss>

