Posts: 1973
Kudos Received: 1225
Solutions: 124

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 794 | 04-03-2024 06:39 AM |
| | 1533 | 01-12-2024 08:19 AM |
| | 783 | 12-07-2023 01:49 PM |
| | 1346 | 08-02-2023 07:30 AM |
| | 1949 | 03-29-2023 01:22 PM |
10-11-2016 08:25 PM
@Jasper please try here https://s3-us-west-1.amazonaws.com/hdb2-on-hdp/Hortonworks_May_2016.vmwarevm.7z
07-05-2016 05:42 PM
Also, if you stop the Phoenix shell and then open it again, the warning message will not appear.
07-26-2018 02:26 PM
My YARN UI is Kerberos-enabled, and GetHTTP is complaining about a 401 authentication error. Is there any workaround for this?
08-24-2016 05:55 AM
Increasing the width of the Chrome (browser) window for Zeppelin (I am accessing Zeppelin at 127.0.0.1:9995/#/) made the Zeppelin clone button bar appear.
07-01-2016 08:32 PM
1 Kudo
I followed the examples from the 3 Pillar Global post and the Apache HBase blog post, then updated them for newer versions. To write an HBase coprocessor you need Google Protocol Buffers; on a Mac you need to install v2.5.0, as that is the version that works with HBase:

brew tap homebrew/versions
brew install protobuf250

Check your version:

protoc --version

Source code with the Maven build (pom.xml) is here. You will need Maven and a Java 7 (or newer) JDK for compilation; a sketch of the endpoint class is shown after the Hadoop test below.

Testing on Hadoop:

export HADOOP_CLASSPATH=`hbase classpath`
hadoop jar hbasecoprocessor-1.0.jar com.dataflowdeveloper.hbasecoprocessor.SumEndPoint
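For orientation, here is a minimal sketch of what an endpoint like SumEndPoint typically looks like. It follows the standard HBase endpoint-coprocessor pattern (see the RowCountEndpoint references below) and is not the linked source itself: the SumProtos.SumService, SumRequest, and SumResponse classes are assumed to be generated by protoc 2.5.0 from the endpoint's .proto definition, and the sketch assumes cell values are stored as 8-byte longs.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.protobuf.ResponseConverter;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.util.Bytes;

import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcController;
import com.google.protobuf.Service;

// SumProtos.* are assumed to be generated by protoc 2.5.0 from the endpoint's .proto file.
public class SumEndPoint extends SumProtos.SumService implements Coprocessor, CoprocessorService {

    private RegionCoprocessorEnvironment env;

    @Override
    public void start(CoprocessorEnvironment env) throws IOException {
        if (env instanceof RegionCoprocessorEnvironment) {
            this.env = (RegionCoprocessorEnvironment) env;
        } else {
            // Endpoint coprocessors must be loaded on a table region, not on the master.
            throw new CoprocessorException("Must be loaded on a table region!");
        }
    }

    @Override
    public void stop(CoprocessorEnvironment env) throws IOException {
        // nothing to clean up
    }

    @Override
    public Service getService() {
        return this;
    }

    // RPC defined in the .proto service: sums the long values of one column within this region.
    @Override
    public void getSum(RpcController controller, SumProtos.SumRequest request,
                       RpcCallback<SumProtos.SumResponse> done) {
        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes(request.getFamily()), Bytes.toBytes(request.getColumn()));

        SumProtos.SumResponse response = null;
        InternalScanner scanner = null;
        try {
            scanner = env.getRegion().getScanner(scan);
            List<Cell> results = new ArrayList<>();
            long sum = 0L;
            boolean hasMore;
            do {
                hasMore = scanner.next(results);
                for (Cell cell : results) {
                    // Assumes the cell value was written with Bytes.toBytes(long).
                    sum += Bytes.toLong(CellUtil.cloneValue(cell));
                }
                results.clear();
            } while (hasMore);
            response = SumProtos.SumResponse.newBuilder().setSum(sum).build();
        } catch (IOException ioe) {
            ResponseConverter.setControllerException(controller, ioe);
        } finally {
            if (scanner != null) {
                try {
                    scanner.close();
                } catch (IOException ignored) {
                    // best-effort close
                }
            }
        }
        done.run(response);
    }
}
```

Because the scan and the summing both run inside the region server, only a single long per region crosses the network, which is the point of using an endpoint rather than a client-side scan.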
Upload your JAR to HDFS:

hadoop fs -mkdir /user/tspann
hadoop fs -ls /user/tspann
hadoop fs -put hbasecoprocessor-1.0.jar /user/tspann
hadoop fs -chmod 777 /user/tspann/hbasecoprocessor-1.0.jar

Install dynamically from the HBase shell:

disable 'stocks'
alter 'stocks', 'coprocessor'=>'hdfs://sandbox.hortonworks.com/user/tspann/hbasecoprocessor-1.0.jar|com.dataflowdeveloper.hbasecoprocessor.SumEndPoint|1001|arg1=1'
enable 'stocks'
describe 'stocks'

Testing locally:

java -classpath `hbase classpath`:hbasecoprocessor-1.0.jar com.dataflowdeveloper.hbasecoprocessor.SumEndPoint

Checking the table after installation:

[root@sandbox demo]# hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Version 1.1.2.2.4.0.0-169, r61dfb2b344f424a11f93b3f086eab815c1eb0b6a, Wed Feb 10 07:08:51 UTC 2016

hbase(main):001:0> describe 'stocks'
Table stocks is ENABLED
stocks, {TABLE_ATTRIBUTES => {coprocessor$1 => 'hdfs://sandbox.hortonworks.com/user/tspann/hbasecoprocessor-1.0.jar|com.dataflowdeveloper.hbasecoprocessor.SumEndPoint|1001|arg1=1'}
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
1 row(s) in 0.3270 seconds
You can see the coprocessor has been added and is enabled.
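With the coprocessor loaded on every region of 'stocks', a client can invoke the endpoint and add up the per-region sums. This is a minimal sketch against the HBase 1.1 client API, not the project's actual client code; the SumProtos generated classes and the 'price' qualifier are assumptions (the 'cf' family comes from the describe output above).

```java
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.coprocessor.Batch;
import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;
import org.apache.hadoop.hbase.ipc.ServerRpcController;

public class SumClient {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("stocks"))) {

            // Hypothetical generated request: sum the 'price' qualifier in family 'cf'.
            final SumProtos.SumRequest request = SumProtos.SumRequest.newBuilder()
                    .setFamily("cf")
                    .setColumn("price")
                    .build();

            // Runs getSum() on every region that holds part of 'stocks'; one result per region.
            Map<byte[], Long> perRegion = table.coprocessorService(
                    SumProtos.SumService.class, null, null,
                    new Batch.Call<SumProtos.SumService, Long>() {
                        @Override
                        public Long call(SumProtos.SumService service) throws IOException {
                            ServerRpcController controller = new ServerRpcController();
                            BlockingRpcCallback<SumProtos.SumResponse> callback =
                                    new BlockingRpcCallback<>();
                            service.getSum(controller, request, callback);
                            SumProtos.SumResponse response = callback.get();
                            if (controller.failedOnException()) {
                                throw controller.getFailedOn();
                            }
                            return response == null ? 0L : response.getSum();
                        }
                    });

            long total = 0L;
            for (Long regionSum : perRegion.values()) {
                total += regionSum;
            }
            System.out.println("Sum across all regions: " + total);
        }
    }
}
```

Passing null for the start and end row keys asks HBase to run the call on every region of the table; per-region failures surface through the ServerRpcController.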
References
https://github.com/larsgeorge/hbase-book/blob/master/ch04/src/main/java/coprocessor/RowCountEndpoint.java
https://github.com/Huawei-Hadoop/hindex
https://github.com/apache/hbase/tree/branch-1.0/hbase-examples
https://github.com/apache/hbase/blob/branch-1.0/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.java
http://bigdatazone.blogspot.com/2015/05/hbase-coprocessor-using-protobuf-250.html
http://hbase.apache.org/book.html#cp
http://hbase.apache.org/book.html#cp_loading
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/coprocessor/package-summary.html
http://hbase.apache.org/book.html#cp_example
http://hbase.apache.org/xref/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.html
https://github.com/apache/hbase/blob/branch-1.1/hbase-examples/pom.xml
https://community.hortonworks.com/questions/2577/hbase-coprocessor-and-security.html
https://www.3pillarglobal.com/insights/hbase-coprocessors#(endpoints-coprocessor)
https://blogs.apache.org/hbase/entry/coprocessor_introduction
https://github.com/dbist/HBaseUnitTest
11-02-2016 06:11 PM
@sai d You must have the VT-x feature enabled in your computer's BIOS. This is a common requirement for most virtual machines these days. What kind of computer are you using? Have you enabled VT-x?
07-01-2016 12:11 AM
2 Kudos
@Timothy Spann
This is by design. Typically for GA releases we publish release notes that mention this behavior; since this is a tech preview, we did not. The services that are turned off on startup are HBase, Atlas, Storm, Kafka, Log Search, and Falcon, and they should have maintenance mode enabled. HDFS does not have the Secondary NameNode running, since it is not needed.
06-30-2016 06:09 PM
3 Kudos
@Timothy Spann You should use the MaxMindDB format. I've always used the City database, located at http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
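For reference, the downloaded .mmdb file can be read directly with MaxMind's GeoIP2 Java client (the com.maxmind.geoip2:geoip2 artifact) to confirm it resolves addresses as expected. This is a minimal sketch; the file path and the sample IP address are placeholders.

```java
import java.io.File;
import java.io.IOException;
import java.net.InetAddress;

import com.maxmind.geoip2.DatabaseReader;
import com.maxmind.geoip2.exception.GeoIp2Exception;
import com.maxmind.geoip2.model.CityResponse;

public class GeoLiteCityLookup {
    public static void main(String[] args) throws IOException, GeoIp2Exception {
        // Placeholder path: point this at the unpacked GeoLite2-City.mmdb from the URL above.
        File database = new File("GeoLite2-City.mmdb");

        try (DatabaseReader reader = new DatabaseReader.Builder(database).build()) {
            // Placeholder address: the IP you want to enrich.
            CityResponse response = reader.city(InetAddress.getByName("128.101.101.101"));

            System.out.println(response.getCountry().getIsoCode()); // e.g. "US"
            System.out.println(response.getCity().getName());
            System.out.println(response.getLocation().getLatitude());
            System.out.println(response.getLocation().getLongitude());
        }
    }
}
```

The same .mmdb file is what a downstream processor or UDF would load; reading it directly is just a quick way to sanity-check the database.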
06-30-2016 06:36 PM
http://www.gdal.org/ogr2ogr.html works, but you would lose any extra information in the XML.
Big collection of tools: https://trac.osgeo.org/osgeo4w/
GIS tools for Hadoop: http://esri.github.io/gis-tools-for-hadoop/
Java library for geo: https://github.com/mraad/Shapefile
Hive Spatial: https://github.com/Esri/spatial-framework-for-hadoop/wiki/Hive-Spatial
Cool tool, but no KML: http://terraformer.io/
06-29-2016 06:08 PM
It's the example for Sum.