Member since: 10-22-2015
Posts: 241
Kudos Received: 86
Solutions: 20
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1594 | 03-11-2018 12:01 AM |
| | 713 | 01-03-2017 10:48 PM |
| | 868 | 12-20-2016 11:11 PM |
| | 2156 | 09-03-2016 01:57 AM |
| | 791 | 09-02-2016 04:55 PM |
03-11-2018
12:22 AM
Compactor calls getCompactionCompressionType() and uses it during compaction:

```java
this.compactionCompression = (this.store.getColumnFamilyDescriptor() == null)
    ? Compression.Algorithm.NONE
    : this.store.getColumnFamilyDescriptor().getCompactionCompressionType();
```

getCompressionType(), on the other hand, is used by the flusher; see the code in DefaultStoreFlusher#flushSnapshot():

```java
writer = store.createWriterInTmp(cellsCount,
    store.getColumnFamilyDescriptor().getCompressionType(),
    false, true, snapshot.isTagsPresent(), false);
```
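For illustration, both settings can be configured per column family from the HBase shell: COMPRESSION controls the codec used at flush time, while COMPRESSION_COMPACT, when set, is used for compaction output. The table name, family name, and codec choices below are hypothetical.

```shell
# In the HBase shell; 't1' and 'cf' are placeholder names,
# and the codecs must actually be installed on the cluster.
alter 't1', {NAME => 'cf', COMPRESSION => 'SNAPPY', COMPRESSION_COMPACT => 'GZ'}
```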
03-11-2018
12:01 AM
Please read the following:
https://blogs.apache.org/hbase/entry/the_effect_of_columnfamily_rowkey
http://hadoop-hbase.blogspot.com/2016/02/hbase-compression-vs-blockencoding_17.html
10-27-2017
07:16 PM
Did you mean that the application spends more time in computation than in retrieving data stored in HBase? Can you describe your use case in more detail?
10-27-2017
06:53 PM
1 Kudo
hbase.backup.enabled should be set to true in the hbase-site.xml deployed on the cluster.
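As a sketch, the entry would look like this in hbase-site.xml (a restart of the affected HBase services is assumed to pick it up):

```xml
<property>
  <name>hbase.backup.enabled</name>
  <value>true</value>
</property>
```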
10-23-2017
06:21 PM
Please take a look at: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_spark-component-guide/content/spark-on-hbase.html
10-15-2017
06:20 PM
Which version of HDP are you using? Can you post the full stack trace?
10-11-2017
08:32 PM
Is hbase.replication.conf.dir defined on every node in the peer cluster? Specifying /tmp is not a good idea, since files under /tmp are periodically cleaned up.
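A sketch of pointing the property at a durable location instead of /tmp (the directory path below is hypothetical):

```xml
<property>
  <name>hbase.replication.conf.dir</name>
  <value>/etc/hbase/replication-conf</value>
</property>
```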
10-06-2017
06:27 PM
1 Kudo
You should also consider the space occupied by archived files, e.g. /apps/hbase/data/archive/data/default/50m, where 50m is the table name and /apps/hbase/data is the rootdir.
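To see how much space the archive occupies, something like the following should work (paths taken from the example above):

```shell
# Summarize archived HFile usage for table 50m under the rootdir /apps/hbase/data
hdfs dfs -du -h /apps/hbase/data/archive/data/default/50m
```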
10-02-2017
07:31 PM
kafka-consumer-groups.sh calls ConsumerGroupCommand. Please take a look at the following method: ConsumerGroupCommand#getPartitionOffsets
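For reference, a typical invocation that exercises that code path might look like this (the broker address and group name are placeholders):

```shell
kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
  --describe --group my-consumer-group
```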
10-02-2017
05:07 PM
Which release of HDP did you install? Can you attach the relevant portion of the HBase master log here? Thanks
09-19-2017
11:06 PM
From hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java:

```java
public static final String NAME = "completebulkload";
```

This class is in the hbase-server jar.
09-19-2017
09:10 PM
Please take a look at http://hbase.apache.org/book.html#arch.bulk.load
09-19-2017
09:04 PM
HFiles can be loaded through bulk load.
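A minimal bulk load invocation, assuming the HFiles have already been generated (the HDFS directory and table name below are hypothetical):

```shell
# LoadIncrementalHFiles is the class behind the "completebulkload" tool
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /user/me/hfiles mytable
```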
09-17-2017
06:22 PM
Please take a look at past thread: http://search-hadoop.com/m/HBase/YGbbj6sIj2Ecwi62?subj=Re+ERROR+Can+t+get+master+address+from+ZooKeeper+znode+data+null
09-16-2017
08:29 PM
Cassandra doesn't use HDFS. Comparison between Cassandra and HBase is sometimes subjective, e.g. https://stackoverflow.com/questions/14950728/why-hbase-is-a-better-choice-than-cassandra-with-hadoop Please let us know your use case so that better advice can be given. Note: HBase has a read replica feature which makes it highly available.
09-14-2017
08:58 PM
HFileOutputFormat2 is in hbase-server in HBase 1.1.x. The mvn dependency:tree output for hbase-server is 200 lines long. You can generate the output yourself for your project. If you encounter a problem, let me know.
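Rather than reading the full 200-line tree, one way to trim the output to HBase artifacts only is the dependency plugin's includes filter, run from your project:

```shell
# Show only dependencies whose groupId is org.apache.hbase
mvn dependency:tree -Dincludes=org.apache.hbase
```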
09-14-2017
08:16 PM
Can you take a look at hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java? getNewWriter() shows you how to create a StoreFile.Writer.
09-14-2017
06:12 PM
Can you take a look at: http://hbase.apache.org/book.html#_writing_hfiles_directly_during_bulk_import which points to: http://hbase.apache.org/book.html#arch.bulk.load
09-13-2017
06:23 PM
bq. Thread.sleep("sometime");
How long was the sleep? It might be possible that compaction finished after the sleep. Which HDP version are you using?
09-13-2017
06:18 PM
Until PHOENIX-3288 is resolved, please set phoenix.schema.isNamespaceMappingEnabled to true. hbase-site.xml should be on the classpath.
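A sketch of the hbase-site.xml entry (it needs to be visible on both the client and server classpaths):

```xml
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
```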
09-12-2017
06:16 PM
For HBase, you can obtain the configuration from the following URL: master-node:16010/conf
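For example, to fetch the effective configuration from the command line (the host name is a placeholder; 16010 is the default master info port):

```shell
curl http://master-node:16010/conf
```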
09-11-2017
09:36 PM
1 Kudo
bq. closing the JDBC connection properly in finally blocks
Mind sharing the relevant code? Thanks
09-11-2017
06:30 PM
1 Kudo
bq. increase in the number of open files in for the Java process.
Which process were you referring to? I assume you were talking about your application.
01-04-2017
11:32 AM
bq. is there a different version for Netty required ?
As I said above, HDP 2.4 came with 3.2.4.Final, which is different from the 3.9.4.Final that OpenTSDB was built with.
01-03-2017
11:55 PM
bq. OpenTSDB only has Netty 3.2.4 jar
But the list below showed netty-3.9.4.Final.jar.
01-03-2017
11:35 PM
As you can see, there are two netty jars on the classpath with different versions. I am not aware of a plan to support OpenTSDB. See if you can build OpenTSDB with the netty version HBase uses. Shading is another option, but I am not an expert there.
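If you go the shading route, a minimal maven-shade-plugin relocation sketch might look like this (the shaded package name is arbitrary, and this is only an outline, not a tested build):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Move the conflicting netty classes into a private namespace -->
          <relocation>
            <pattern>org.jboss.netty</pattern>
            <shadedPattern>shaded.org.jboss.netty</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```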
01-03-2017
10:55 PM
Here is HDP 2.4's dependency:

```
[INFO] org.apache.hbase:hbase-it:jar:1.1.2
...
[INFO] +- org.jboss.netty:netty:jar:3.2.4.Final:compile
```
01-03-2017
10:52 PM
The above seems to indicate a version mismatch of netty between OpenTSDB and HDP 2.4. Can you get the version of netty for OpenTSDB 2.2.1? Thanks
01-03-2017
10:49 PM
bq. HBase versions shipped with HDP2.4 and 2.5, both say 1.1.2
Both 2.4 and 2.5 were based on HBase 1.1.2, but 2.5 had a lot more fixes / backports (MOB being one of them).