Created on 04-10-2023 04:22 AM - edited 04-10-2023 04:42 AM
Hi Team,
I am having two issues with Cloudera HBase connectivity:
1. Connecting to HBase through the Phoenix driver is not working.
2. I am able to connect to HBase through the HBase shell and insert data, but I have a problem reading that data back through the Phoenix driver.
The steps I followed are below:
I created the HBase table using sqlline and insert data through the HBase shell put API from a Spark Java application.
My consumer app reads the data through the Phoenix driver.
Can you please suggest any table configuration that would let me use the same HBase table from both the HBase shell and the Phoenix driver?
The issue with the existing table is:
I can query the data properly through the HBase shell, but when I query it through the Phoenix driver the row key value comes back truncated (only the first letter), while the other columns are fine.
While creating the table, I used the following options:
COLUMN_ENCODED_BYTES=0, SALT_BUCKETS=88, COMPRESSION='SNAPPY', DATA_BLOCK_ENCODING='FAST_DIFF'
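For reference, the table options above can be assembled into a complete Phoenix CREATE TABLE statement. This is a hedged sketch only: the table name, namespace, and columns (MY_NS.MY_TABLE, ID, VAL) are hypothetical stand-ins, not taken from the thread.

```python
# Hedged sketch: build the Phoenix DDL implied by the options listed above,
# with the option names spelled as Phoenix expects them.
# Table and column names are hypothetical.
ddl = (
    "CREATE TABLE IF NOT EXISTS MY_NS.MY_TABLE ("
    " ID VARCHAR NOT NULL PRIMARY KEY,"
    " VAL VARCHAR"
    ") COLUMN_ENCODED_BYTES=0,"
    " SALT_BUCKETS=88,"
    " COMPRESSION='SNAPPY',"
    " DATA_BLOCK_ENCODING='FAST_DIFF'"
)
print(ddl)
```

Note that SALT_BUCKETS > 0 makes Phoenix prepend a salt byte to every row key it writes, which matters for the symptom described later in this thread.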
Created on 04-10-2023 08:19 AM - edited 04-10-2023 08:20 AM
@bavisetti, Welcome to our community! To help you get the best possible answer, I have tagged our HBase/Phoenix experts @smdas @rki_ @willx @Samanta001 who may be able to assist you further.
Please feel free to provide any additional information or details about your query, and we hope that you will find a satisfactory solution to your question.
Regards,
Vidya Sargur
Created 04-14-2023 03:43 AM
Any help on this issue?
Created 04-10-2023 08:07 PM
Any solution? Through core HBase I can see the data properly; why is it getting truncated when reading through Phoenix? Any suggestions?
Created 04-10-2023 11:07 PM
Hi @bavisetti ,
Refer to https://phoenix.apache.org/language/index.html#create_view
Thanks,
Will
Created 04-11-2023 03:48 AM
CDH: 7.1.6.cdhp150
HBase version: 2.2.3.7.1.6.150-1
Phoenix version: 6.0.0.7.1.6.150-1
Phoenix driver version: 6.0.0.7
I found the issue is with the salt buckets. Is there any way to add a RegionSplitPolicy while inserting data through the HBase shell?
Created on 04-11-2023 04:30 AM - edited 04-11-2023 04:42 AM
Is there any way to configure KeyPrefixRegionSplitPolicy while inserting data through the HBase connector from a Java Spark application?
Created 04-11-2023 05:33 AM
Please refer to this doc https://blog.cloudera.com/apache-hbase-region-splitting-and-merging/ for split policy.
So far, based on your statements, I cannot conclude whether it is due to the salt buckets or the split policy; we need more evidence from logs.
We would suggest you raise a Cloudera support case so we can collect the necessary information and logs to investigate.
Please make sure the above questions are answered; in addition, we also need to collect:
hbase:
- echo "scan 'namespace:tablename'" | hbase shell > /tmp/scan_meta.txt
- echo "describe 'namespace:tablename'" | hbase shell > /tmp/desc_table.txt
- echo "list_regions 'namespace:tablename'" | hbase shell > /tmp/list_regions.txt
phoenix-sqlline:
- select * from system.catalog;
- !tables
- select * from namespace.table;
- Your client code using the Phoenix driver, and the output that reflects the issue: "when I am querying data through the Phoenix driver, the row key value is getting truncated (only the first letter) and the other columns are good."
Created 04-11-2023 09:13 AM
Hi @willx,
I confirmed it is a salt-bucket issue by creating a table with SALT_BUCKETS=0, and it worked well.
I also created a sample table through sqlline with SALT_BUCKETS > 0 and inserted data from both the HBase shell and sqlline. The data inserted through sqlline is prefixed with one extra character, whereas the data inserted through the HBase shell is exactly the value I inserted.
Can you please help me create a table with the Phoenix driver using a different region split policy instead of the row-key prefix, or show how to generate the random prefix to prepend through the HBase shell?
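The extra leading character described above is consistent with Phoenix salting: with SALT_BUCKETS=N, Phoenix prepends one salt byte (derived from a hash of the row key) on every write, and strips the first byte of each stored key on read. Rows written through the HBase shell never get that salt byte, so Phoenix drops the first real character of the key. The sketch below illustrates the mechanism only; the hash used is a stand-in, not Phoenix's actual hash function.

```python
# Hedged sketch of why hbase-shell-written rows read back wrong from a
# salted Phoenix table. The salt computation here is a stand-in for
# Phoenix's real hash; only the prepend/strip behavior is the point.

def phoenix_style_write_key(rowkey: bytes, buckets: int) -> bytes:
    """Prepend a salt byte the way a salted Phoenix table would (sketch)."""
    salt = sum(rowkey) % buckets      # stand-in for Phoenix's hash
    return bytes([salt]) + rowkey

def phoenix_style_read_key(stored_key: bytes) -> bytes:
    """On read, Phoenix assumes the first byte is the salt and strips it."""
    return stored_key[1:]

# Row written through sqlline: salt byte is added, so it round-trips.
stored = phoenix_style_write_key(b"row-001", buckets=88)
assert phoenix_style_read_key(stored) == b"row-001"

# Row written through the HBase shell: no salt byte was ever added, so
# Phoenix silently drops the first real byte of the key on read.
raw = b"row-001"                      # what `put` in the hbase shell stored
print(phoenix_style_read_key(raw))    # the key comes back missing a byte
```

This also explains why the problem disappeared with SALT_BUCKETS=0: without salting, Phoenix and the HBase shell agree byte-for-byte on the row key.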
Thanks,
Jyothsna
Created 04-11-2023 10:50 PM
Hi @willx ,
Can you please suggest whether there is any way to disable the prefix row-key behavior and use a different region split policy while creating the Phoenix table?
Thanks,
Jyothsna
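On the question above: as far as I know, the salt prefix cannot be swapped out on a table that was created with SALT_BUCKETS > 0. The usual alternative is to omit SALT_BUCKETS entirely and pre-split the table yourself with Phoenix's SPLIT ON clause, which keeps row keys exactly as written, so the HBase shell and the Phoenix driver agree on them. A hedged sketch, assuming hypothetical table, column, and split-point values:

```python
# Hedged sketch: DDL for a non-salted, manually pre-split Phoenix table.
# Table name, columns, and split points are hypothetical stand-ins.
ddl = (
    "CREATE TABLE IF NOT EXISTS MY_NS.MY_TABLE ("
    " ID VARCHAR NOT NULL PRIMARY KEY,"
    " VAL VARCHAR"
    ") COMPRESSION='SNAPPY', DATA_BLOCK_ENCODING='FAST_DIFF'"
    " SPLIT ON ('a', 'h', 'p')"
)
assert "SALT_BUCKETS" not in ddl  # no salt byte => no key-prefix mismatch
print(ddl)
```

With this layout, the write-distribution benefit of salting is lost, so the split points should be chosen to match the real key distribution.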