Created 06-14-2016 02:29 PM
Can someone please take a base, out-of-the-box Hortonworks Sandbox VM, use one of the following repos from GitHub (or any Spring framework sample with HBase), and tell me what the heck I am doing wrong? I have tried multiple GitHub projects to see how this stuff works and I cannot get anything to connect successfully. I have been fighting this for a week now and I cannot seem to gain any ground.
Hortonworks VM - HDP™ 2.4 on Hortonworks Sandbox
Repo 1: https://github.com/spring-projects/spring-hadoop-samples/
Repo 2: https://github.com/spring-projects/spring-data-book/tree/master/hadoop
I cannot get either of these to work. I have also tried just using a basic availability check:
Configuration c = HBaseConfiguration.create();
HBaseAdmin.checkHBaseAvailable(c);
with the following hbase-site.xml:
<configuration>
  <property><name>dfs.domain.socket.path</name><value>/var/lib/hadoop-hdfs/dn_socket</value></property>
  <property><name>hbase.bucketcache.ioengine</name><value></value></property>
  <property><name>hbase.bucketcache.percentage.in.combinedcache</name><value></value></property>
  <property><name>hbase.bucketcache.size</name><value></value></property>
  <property><name>hbase.bulkload.staging.dir</name><value>/apps/hbase/staging</value></property>
  <property><name>hbase.client.keyvalue.maxsize</name><value>1048576</value></property>
  <property><name>hbase.client.retries.number</name><value>35</value></property>
  <property><name>hbase.client.scanner.caching</name><value>100</value></property>
  <property><name>hbase.cluster.distributed</name><value>true</value></property>
  <property><name>hbase.coprocessor.master.classes</name><value>org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor</value></property>
  <property><name>hbase.coprocessor.region.classes</name><value>org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor</value></property>
  <property><name>hbase.coprocessor.regionserver.classes</name><value></value></property>
  <property><name>hbase.defaults.for.version.skip</name><value>true</value></property>
  <property><name>hbase.hregion.majorcompaction</name><value>604800000</value></property>
  <property><name>hbase.hregion.majorcompaction.jitter</name><value>0.50</value></property>
  <property><name>hbase.hregion.max.filesize</name><value>10737418240</value></property>
  <property><name>hbase.hregion.memstore.block.multiplier</name><value>4</value></property>
  <property><name>hbase.hregion.memstore.flush.size</name><value>134217728</value></property>
  <property><name>hbase.hregion.memstore.mslab.enabled</name><value>true</value></property>
  <property><name>hbase.hstore.blockingStoreFiles</name><value>10</value></property>
  <property><name>hbase.hstore.compaction.max</name><value>10</value></property>
  <property><name>hbase.hstore.compactionThreshold</name><value>3</value></property>
  <property><name>hbase.local.dir</name><value>${hbase.tmp.dir}/local</value></property>
  <property><name>hbase.master.info.bindAddress</name><value>0.0.0.0</value></property>
  <property><name>hbase.master.info.port</name><value>16010</value></property>
  <property><name>hbase.master.port</name><value>16000</value></property>
  <property><name>hbase.region.server.rpc.scheduler.factory.class</name><value></value></property>
  <property><name>hbase.regionserver.global.memstore.size</name><value>0.4</value></property>
  <property><name>hbase.regionserver.handler.count</name><value>30</value></property>
  <property><name>hbase.regionserver.info.port</name><value>16030</value></property>
  <property><name>hbase.regionserver.port</name><value>16020</value></property>
  <property><name>hbase.regionserver.wal.codec</name><value>org.apache.hadoop.hbase.regionserver.wal.WALCellCodec</value></property>
  <property><name>hbase.rootdir</name><value>hdfs://localhost:8020/apps/hbase/data</value></property>
  <property><name>hbase.rpc.controllerfactory.class</name><value></value></property>
  <property><name>hbase.rpc.engine</name><value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value></property>
  <property><name>hbase.rpc.protection</name><value>PRIVACY</value></property>
  <property><name>hbase.rpc.timeout</name><value>90000</value></property>
  <property><name>hbase.security.authentication</name><value>simple</value></property>
  <property><name>hbase.security.authorization</name><value>true</value></property>
  <property><name>hbase.superuser</name><value>hbase</value></property>
  <property><name>hbase.tmp.dir</name><value>/tmp/hbase-${user.name}</value></property>
  <property><name>hbase.zookeeper.property.clientPort</name><value>2181</value></property>
  <property><name>hbase.zookeeper.quorum</name><value>localhost</value></property>
  <property><name>hbase.zookeeper.useMulti</name><value>true</value></property>
  <property><name>hfile.block.cache.size</name><value>0.40</value></property>
  <property><name>phoenix.functions.allowUserDefinedFunctions</name><value></value></property>
  <property><name>phoenix.query.timeoutMs</name><value>60000</value></property>
  <property><name>zookeeper.session.timeout</name><value>60000</value></property>
  <property><name>zookeeper.znode.parent</name><value>/hbase-unsecure</value></property>
</configuration>
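Spelled out as a self-contained class, the check looks like this (a sketch; the class name is just for illustration, and it assumes the hbase-site.xml above is on the client classpath - HBaseAdmin.checkHBaseAvailable is still present in the HBase 1.x client that ships with HDP 2.4):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseAvailabilityCheck {
    public static void main(String[] args) {
        // Picks up hbase-site.xml (and hbase-default.xml) from the classpath.
        Configuration conf = HBaseConfiguration.create();
        try {
            // Throws MasterNotRunningException / ZooKeeperConnectionException on failure.
            HBaseAdmin.checkHBaseAvailable(conf);
            System.out.println("HBase is available");
        } catch (Exception e) {
            System.err.println("HBase is NOT available: " + e);
        }
    }
}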
I have tried asking questions about the error messages and still get nowhere, so is it possible for someone to just try one of those GitHub code bases and tell me what I am doing wrong? I just cannot seem to get anywhere and it is so frustrating. I hope someone can help - thanks a lot.
Created 06-17-2016 01:54 PM
I finally got it up and running. Even though the client connects to HBase through ZooKeeper on port 2181, there are other ports that must be open as well. The main issue I kept running into was that ports 16000 (hbase.master.port) and 16020 (hbase.regionserver.port) also need to be reachable; the sandbox opens 16010 and 16030 (the web UI ports) for HBase, but not 16000 or 16020. Once I opened those ports I was able to connect externally from Java.
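For anyone hitting the same wall, a quick way to confirm from the client machine that those ports are actually reachable is a plain socket test (a minimal sketch; the sandbox hostname is an assumption - use whatever address your VM answers on):
import java.net.InetSocketAddress;
import java.net.Socket;

public class SandboxPortCheck {
    public static void main(String[] args) {
        String host = "sandbox.hortonworks.com"; // assumption: your sandbox host or IP
        int[] ports = {2181, 16000, 16020};      // ZooKeeper, HBase Master, RegionServer
        for (int port : ports) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 3000);
                System.out.println(host + ":" + port + " is reachable");
            } catch (Exception e) {
                System.out.println(host + ":" + port + " is NOT reachable: " + e.getMessage());
            }
        }
    }
}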
Created 06-14-2016 02:49 PM
Last commit to https://github.com/spring-projects/spring-hadoop-samples was 11 months ago.
The other one was even older.
See if you can find a newer repo.
If I have time, I plan to try the first repo.
Created 06-14-2016 03:02 PM
@Ted Yu I will look to see if I can find a newer one; I tried looking before but everything I found wouldn't work. It would be a great help if you did try - I would appreciate that so much! Thanks for the help.
Created 06-14-2016 04:40 PM
This is what I did:
cloned spring-hadoop-samples
cd spring-hadoop-samples/hbase
mvn clean package
copied hbase-site.xml from cluster to target/appassembler/etc/hbase-site.xml
sh ./target/appassembler/bin/usercount
Please try the above
Created 06-14-2016 04:53 PM
I verified that the users table was created after running the usercount example:
'users', {NAME => 'cfInfo'}
Created 06-14-2016 05:06 PM
Note: here is a snippet of the CLASSPATH setting in the sample script:
CLASSPATH=..."$REPO"/hbase-client-0.98.5-hadoop2.jar:"$REPO"/hbase-common-0.98.5-hadoop2.jar:"$REPO"/hbase-protocol-0.98.5-hadoop2.jar
You can plug in the corresponding version of the HBase jars for your cluster.
The 0.98 client is compatible with the 1.x release - that is why the sample worked.
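If it is unclear which HBase client version actually ends up on the script's classpath, a small sketch like the following will print it (VersionInfo lives in hbase-common; the class name here is just for illustration):
import org.apache.hadoop.hbase.util.VersionInfo;

public class PrintHBaseClientVersion {
    public static void main(String[] args) {
        // Prints the version of the HBase client/common jars found on the classpath.
        System.out.println("HBase client version: " + VersionInfo.getVersion());
    }
}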
Created 06-14-2016 05:11 PM
That was due to the following config having the wrong value - plug in the actual quorum:
Created 06-14-2016 05:47 PM
@Ted Yu Does this mean you did try it locally instead of on the VM?
Created 06-14-2016 05:12 PM
@Ted Yu, is this from an external source or on the same machine? When I run commands on the VM everything is fine; it's just connecting to ZooKeeper externally where I am having issues. I am running the VM on my desktop, and then using code on the desktop to connect to the HBase instance inside the VM.
Created 06-14-2016 05:47 PM
Running the sample outside the VM should work as long as the correct quorum is in hbase-site.xml.
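If editing hbase-site.xml is inconvenient, the same settings can be overridden programmatically - a sketch, with the sandbox address and class name as assumptions:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ExternalQuorumConfig {
    public static Configuration sandboxConfig() {
        Configuration conf = HBaseConfiguration.create();
        // assumption: replace with the address your sandbox VM is reachable at
        conf.set("hbase.zookeeper.quorum", "sandbox.hortonworks.com");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // the sandbox registers HBase under /hbase-unsecure (see hbase-site.xml above)
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");
        return conf;
    }
}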