Member since: 10-05-2015
Posts: 105
Kudos Received: 83
Solutions: 25
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1362 | 07-13-2017 09:13 AM |
 | 1601 | 07-11-2017 10:23 AM |
 | 822 | 07-10-2017 10:43 AM |
 | 3747 | 03-23-2017 10:32 AM |
 | 3650 | 03-23-2017 10:04 AM |
10-04-2016
06:13 AM
1 Kudo
@SBandaru For HBase tuning you can refer to the following links:
http://www.slideshare.net/lhofhansl/h-base-tuninghbasecon2015ok
https://community.hortonworks.com/questions/1277/what-is-the-best-consolidated-guide-for-hbase-tuni.html
For Phoenix tuning, this link helps: http://phoenix.apache.org/tuning.html
Use http://phoenix.apache.org/update_statistics.html for more parallelization and better performance (a small example follows below). For more Phoenix-level optimizations, you can refer to the Optimization sections in http://www.slideshare.net/je2451/apache-phoenix-and-apache-hbase-an-enterprise-grade-data-warehouse
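For reference, statistics collection in Phoenix is triggered with an UPDATE STATISTICS statement. A minimal sketch against a hypothetical table MY_TABLE (the table name is illustrative, not from this thread):

-- Hypothetical table name; ALL gathers statistics on the data table and its indexes
UPDATE STATISTICS MY_TABLE ALL;
-- Simple form, run after large data loads so guideposts reflect the new data
UPDATE STATISTICS MY_TABLE;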
10-04-2016
05:51 AM
@Ramy Mansour If the Phoenix schema you are going to map to the HBase table has a composite primary key, data types other than strings, or secondary indexes, then you should use CsvBulkLoadTool; otherwise you can go ahead with ImportTsv, which performs better. The remaining optimizations help in both cases, so you can use them either way. (A small example of the composite-key case is sketched below.)
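As a rough illustration of the first case, a hypothetical Phoenix table with a composite primary key and a non-string type (table and column names are made up), which would call for CsvBulkLoadTool rather than ImportTsv:

CREATE TABLE IF NOT EXISTS EVENTS (
    TENANT_ID VARCHAR NOT NULL,
    EVENT_TIME DATE NOT NULL,
    PAYLOAD VARCHAR
    CONSTRAINT PK PRIMARY KEY (TENANT_ID, EVENT_TIME)
);

CsvBulkLoadTool knows how Phoenix encodes this composite row key and the DATE column, whereas ImportTsv would only write the raw string bytes.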
09-30-2016
06:15 AM
4 Kudos
@Ramy Mansour You can directly create the table in Phoenix and load data using CsvBulkLoadTool: http://phoenix.apache.org/bulk_dataload.html#Loading_via_MapReduce With your data there will currently be thousands of mappers running. The number of reducers depends on the number of regions, so to increase parallelization you can pre-split the table by providing split points in the DDL statement. You can also compress the table to reduce IO and the amount of data shuffled during the bulk load: http://phoenix.apache.org/language/index.html#create_table (see the DDL sketch after the configuration snippet below). Alternatively, you can use the ImportTsv and completebulkload tools to load data into the HBase table directly: https://hbase.apache.org/book.html#importtsv https://hbase.apache.org/book.html#completebulkload A few more configurations can be added to mapred-site.xml to improve job performance:
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
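To make the pre-split and compression points concrete, here is a rough DDL sketch (table name, columns, and split points are made up; choose split points that match your actual key distribution):

CREATE TABLE IF NOT EXISTS BULK_TARGET (
    ID VARCHAR NOT NULL PRIMARY KEY,
    COL1 VARCHAR,
    COL2 INTEGER
)
COMPRESSION='SNAPPY'
SPLIT ON ('d', 'h', 'l', 'p', 't');

With five split points the table starts with six regions, so the bulk-load job runs six reducers instead of one.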
09-29-2016
06:58 AM
1 Kudo
You can get more info here: https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration http://hortonworks.com/blog/hbase-via-hive-part-1/
09-11-2016
02:38 AM
1 Kudo
You can get more details here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_Sys_Admin_Guides/content/ch_clust_capacity.html
09-09-2016
05:48 AM
6 Kudos
@Dheeraj Madan After altering the table you can run a major compaction on it from the hbase shell, so that the existing data gets rewritten with the new compression:
> major_compact 'SNAPPY_TABLE'
08-11-2016
01:25 PM
2 Kudos
Got the logs from @nshetty and checked: somehow the hbase:acl table was not created, because the hbase:acl znode from an earlier install was present while the table itself was not. After deleting the hbase:acl znode and restarting the service, it's working fine.
08-03-2016
04:54 AM
I see. Can you check the RegionServer UI to see whether any requests are coming in to the table? Or can you scan the HBase table to check whether there is any data in it? If you can provide any logs, I can take a look and try to help you out.
08-03-2016
03:41 AM
1 Kudo
It seems that PDI does not support Apache Phoenix yet. While loading data, PDI generates INSERT queries, but Phoenix needs UPSERT queries (see the small example below). http://jira.pentaho.com/browse/PDI-14038 https://mail-archives.apache.org/mod_mbox/phoenix-user/201509.mbox/%3CCAB3fahz1wacofQHDTyMMO-W_nCQ0gAcL0LpXdeMmZMxVXwwWOg@mail.gmail.com%3E It would be better to contact the Pentaho PDI community.
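For context, a minimal sketch of the difference with a hypothetical table (names are illustrative only):

-- Phoenix accepts UPSERT, which inserts or updates the row
UPSERT INTO MY_TABLE (ID, NAME) VALUES ('1', 'foo');
-- It has no INSERT statement, so the query PDI emits fails:
-- INSERT INTO MY_TABLE (ID, NAME) VALUES ('1', 'foo');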
08-01-2016
04:12 PM
@Habeeb Shana'a Transaction support is available from Phoenix 4.7.x onwards, and HDP 2.5 is going to include it as a Tech Preview.
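For reference, assuming transactions are enabled on the cluster (phoenix.transactions.enabled=true in hbase-site.xml and the transaction manager running), a table opts in through the TRANSACTIONAL property; a hypothetical sketch with made-up table names:

-- New table created as transactional
CREATE TABLE MY_TXN_TABLE (K BIGINT PRIMARY KEY, V VARCHAR) TRANSACTIONAL=true;
-- An existing table can also be switched over
ALTER TABLE MY_EXISTING_TABLE SET TRANSACTIONAL=true;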