Support Questions
Find answers, ask questions, and share your expertise

Using Spark Hbase Connector on CDH 6.3.2 Spark 2.4 HBase 2.1

New Contributor

I am using the Spark HBase Connector.

Cloudera distribution: 6.3.2

HBase version: 2.1.0

Scala Version: 2.11.12


With spark-hbase connector version 1.1.1-2.1-s_2.11, it fails with the error below:

java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods$.parse(Lorg/json4s/JsonInput;Z)Lorg/json4s/JsonAST$JValue;
  at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog$.apply(HBaseTableCatalog.scala:257)


Searching the internet suggested that shc-core:1.1.3-2.4-s_2.11.jar solves this issue, but I could not find a repository to download it from.

Any suggestions, please?





Expert Contributor

You can use the public Hortonworks repo.

You may not find the exact version you mentioned, but you can browse the repo and pick the dependency that matches your cluster version.


You can try the following dependency:
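A build.sbt sketch along these lines; the repository URL is an assumption (the public Hortonworks repo commonly used for shc-core), and the version shown is the 1.1.1-2.1-s_2.11 build mentioned earlier in the thread rather than a confirmed match for every cluster:

```scala
// Sketch: the repository URL is assumed (public Hortonworks repo);
// check the repo for the shc-core version that matches your cluster.
resolvers += "Hortonworks Releases" at "https://repo.hortonworks.com/content/groups/public/"

libraryDependencies += "com.hortonworks" % "shc-core" % "1.1.1-2.1-s_2.11"
```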




Let me know if it works. It should be compatible.


New Contributor

Hi, I am facing the same error as in this topic, but after changing the version of shc-core the error changed to the following:


Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.parser.AbstractSqlParser: method <init>()V not found
at org.apache.spark.sql.execution.SparkSqlParser.<init>(SparkSqlParser.scala:42)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.sqlParser$lzycompute(BaseSessionStateBuilder.scala:117)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.sqlParser(BaseSessionStateBuilder.scala:116)
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1104)
at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:145)
at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:144)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:144)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:141)
at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:788)
at ContactEngineRun$.main(ContactEngineRun.scala:26)  <- this stack frame corresponds to the code below


val hbaseDF = spark.read  // assumes a SparkSession `spark` and a catalog JSON string `catalog`
  .options(Map(HBaseTableCatalog.tableCatalog -> catalog))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .load()


These are my build dependencies:


val spark_version = "2.4.4"

libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-library" % "2.11.12",
  "org.apache.spark" %% "spark-streaming" % spark_version,
  "org.apache.spark" %% "spark-core" % spark_version,
  "org.apache.spark" %% "spark-sql" % spark_version,
  "org.apache.hbase.connectors.spark" % "hbase-spark" % "",
  "org.apache.hbase" % "hbase-client" % "2.4.0",
  "com.hortonworks.shc" % "shc-core" % ""
)

Can someone help me?


@SibDe Sorry to disturb you. Are you using Cloudera Manager 6.3.x? Could you please help me verify an issue I have encountered? FYI, I upgraded Cloudera Manager from 5.10.0 to 6.3.0, and since then the following API does not seem to work properly.

Can you help verify whether 'wget http://<cm_server_host>:7180/api/v13/clusters/<cluster_name>/services/<service_name>/clientConfig' works as expected? Thanks in advance!



New Contributor

Were you able to find an alternative, since the jar doesn't seem to be available in the Maven repository yet?

When trying to import it through --packages, it throws the error below:


Exception in thread "main" java.lang.RuntimeException: [unresolved dependency: com.hortonworks#shc;1.1.3-2.4-s_2.11: not found]
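By default, `--packages` resolves artifacts from Maven Central (plus the local Ivy cache), so an artifact that was only ever published to the Hortonworks repository has to be located with an explicit `--repositories` flag. A hedged sketch, assuming the public Hortonworks repo URL and falling back to the 1.1.1-2.1-s_2.11 build discussed earlier (1.1.3-2.4-s_2.11 may simply never have been published there):

```shell
# Sketch only: the repository URL, version, and application jar/class
# names are assumptions, not taken from this thread.
spark-submit \
  --repositories https://repo.hortonworks.com/content/groups/public/ \
  --packages com.hortonworks:shc-core:1.1.1-2.1-s_2.11 \
  --class ContactEngineRun \
  my-app.jar
```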

New Contributor

@lerner as @schhabra1 says:

You can use the public Hortonworks repo.

New Contributor

Sorry for disturbing. I am working on a toy project which needs to insert a Spark DataFrame into HBase.


Apache Kafka
Apache Spark
Apache HBase

After reading some posts about Spark HBase connectors, I decided to use the Hortonworks Spark HBase Connector.

I am wondering whether I need the HBase client configuration file hbase-site.xml for the Hortonworks Spark HBase Connector when working in a CDH environment?
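For context, a common way to make client-side HBase configuration visible to a Spark job is to ship hbase-site.xml with the application so it lands in the containers' working directory. A sketch under stated assumptions (the config path is the usual CDH location, and the jar/class names are placeholders):

```shell
# Sketch: on CDH, the client hbase-site.xml typically lives under
# /etc/hbase/conf; --files ships it into each container's working
# directory, where it can be picked up on the classpath.
spark-submit \
  --files /etc/hbase/conf/hbase-site.xml \
  --class MyHBaseJob \
  my-app.jar
```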


Thanks for your help in advance!
