Posts: 15
Registered: ‎09-01-2015

Issue connecting to HBase on Cloudera QuickStart VM CDH 5.8.0 from Scala code on Windows 7

Hi All,


This code of mine (please see the solution part in this post) works fine when my sbt project folder is inside my Cloudera VM.

Now I am trying to connect to the same HBase table from the Windows machine on which the Cloudera VM is running.

I made the following changes in my project code:

  1. I copied my project root directory structure "/play-sbt-project/*" to my Windows D: drive.
  2. Added the lines below inside the /play-sbt-project/src/main/scala/pw.scala file:
    conf.set("hbase.zookeeper.quorum", "") // IP address of my Cloudera virtual machine
    conf.set("hbase.zookeeper.property.clientPort", "2181")
  3. My new "pw.scala" looks like this:


package main.scala

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{ConnectionFactory,HTable,Put}
import org.apache.hadoop.hbase.util.Bytes

object Hi {

  def main(args: Array[String]): Unit = {
    val conf: Configuration = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "") // IP address of my Cloudera virtual machine
    conf.set("hbase.zookeeper.property.clientPort", "2181")
    val table: HTable = new HTable(conf, "emp1")
    val put1: Put = new Put(Bytes.toBytes("row1"))
    put1.add(Bytes.toBytes("personal_data"), Bytes.toBytes("qual1"), Bytes.toBytes("val1"))
    table.put(put1)
    println("Success")
  }
}


  4. I haven't done anything related to my CLASSPATH variable on my Windows machine. If I need to make any changes to my CLASSPATH variable, how and where should I make them?
  5. Finally, I am running the "sbt run" command from my project root directory on my Windows machine.
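On the classpath question, one option I am considering (just my guess, not something I have verified) is to copy hbase-site.xml from the VM into the project's src/main/resources folder, so that HBaseConfiguration.create() can pick it up from the runtime classpath instead of relying on the Windows CLASSPATH variable. My build.sbt would then look roughly like this (the version numbers here are assumptions on my part):

```scala
// build.sbt -- sketch only; adjust versions to match the CDH 5.8.0 jars
name := "play-sbt-project"

scalaVersion := "2.11.8"

// hbase-client provides the Configuration / HTable / Put classes used in pw.scala
libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-common" % "2.6.0",
  "org.apache.hbase"  % "hbase-client"  % "1.2.0"
)

// files under src/main/resources (e.g. hbase-site.xml)
// are placed on the runtime classpath by sbt by default
```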

I am getting the error below:

D:\scala-hbase>sbt run
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[info] Set current project to scala-hbase (in build file:/D:/scala-hbase/)
[info] Running Hi
log4j:WARN No appenders could be found for logger (
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See for more in
[error] (run-main-0) org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: UnknownHostException: 1 time,
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: UnknownHostException: 1 time,
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(A
        at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndR
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCom
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMuta
        at org.apache.hadoop.hbase.client.HTable.flushCommits(
        at org.apache.hadoop.hbase.client.HTable.put(
        at Hi$.main(hw.scala:16)
        at Hi.main(hw.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
        at java.lang.reflect.Method.invoke(
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
        at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 533 s, completed Mar 31, 2017 1:50:22 AM


From the error log I can see it says:

Failed 1 action: UnknownHostException: 1 time


But I am not able to rectify this error.
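Since the log points at UnknownHostException, I suspect Windows may not be able to resolve the hostname that the VM's HBase region server advertises (the Cloudera QuickStart VM normally uses quickstart.cloudera, but that is an assumption on my part). A small check I could run on the Windows side:

```scala
import java.net.InetAddress
import scala.util.Try

object ResolveCheck {
  // Returns true if this JVM can resolve the given hostname to an IP address
  def canResolve(host: String): Boolean =
    Try(InetAddress.getByName(host)).isSuccess

  def main(args: Array[String]): Unit = {
    // "quickstart.cloudera" is an assumption -- replace it with whatever
    // hostname appears in the full UnknownHostException message
    println(s"localhost resolvable: ${canResolve("localhost")}")
    println(s"quickstart.cloudera resolvable: ${canResolve("quickstart.cloudera")}")
  }
}
```

If the VM's hostname does not resolve, would adding an entry mapping it to the VM's IP in C:\Windows\System32\drivers\etc\hosts be the right fix?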

Any help related to this is highly appreciated. Thanks!