Support Questions


HBase table issue while reading data through the Phoenix driver

Explorer

Hi Team,

I am having two issues with the Cloudera HBase connection:
1. Connecting to HBase through the Phoenix driver is not working.

2. I am able to connect to HBase through the HBase shell and insert data, but I have a problem reading that data through the Phoenix driver.

The steps I followed are below:
I created the HBase table using sqlline and inserted data through the HBase shell `put` API from a Spark Java application.
My consumer app then reads the data through the Phoenix driver.

Can you please suggest any table configuration that would let me use the HBase table from the HBase shell as well as the Phoenix driver?

The issue with the existing table is:

I am able to query the data properly through the HBase shell, but when I query through the Phoenix driver the rowkey value gets truncated (only the first letter is returned), while the other columns are fine.

While creating the table, I used the following configurations:
COLUMN_ENCODED_BYTES=0, SALT_BUCKETS=88, COMPRESSION='SNAPPY', DATA_BLOCK_ENCODING='FAST_DIFF'
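For reference, a table with those options would be created in phoenix-sqlline roughly as follows (the namespace, table, and column names here are placeholders, not from the original post):

```sql
-- Hypothetical sketch: a salted Phoenix table with the options above.
-- SALT_BUCKETS > 0 makes Phoenix prepend a one-byte salt to every rowkey,
-- which is why rows written unsalted by the HBase shell and rows written
-- through Phoenix do not line up on read.
CREATE TABLE IF NOT EXISTS MY_NS.MY_TABLE (
    ROWKEY VARCHAR NOT NULL PRIMARY KEY,
    CF.COL1 VARCHAR
)
COLUMN_ENCODED_BYTES = 0,
SALT_BUCKETS = 88,
COMPRESSION = 'SNAPPY',
DATA_BLOCK_ENCODING = 'FAST_DIFF';
```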

13 Replies

Community Manager

@bavisetti, Welcome to our community! To help you get the best possible answer, I have tagged our HBase/Phoenix experts @smdas @rki_ @willx @Samanta001  who may be able to assist you further.

Please feel free to provide any additional information or details about your query, and we hope that you will find a satisfactory solution to your question.



Regards,

Vidya Sargur,
Community Manager



Explorer

Any help on this issue?

Explorer

Any solution? Through core HBase I am able to see the data properly, so why does it get truncated when reading through Phoenix? Any suggestions?

Master Collaborator

Hi @bavisetti ,

  • Please provide your CDH/CDP version, HBase version, Phoenix version, and Phoenix driver version.
  • Are you able to use the Phoenix driver to create a Phoenix table, upsert into it, and select data from it?
  • Are you able to do the above in phoenix-sqlline?
  • If you already have an HBase table, you may need to create a view in Phoenix so the Phoenix client can read it.

Refer to https://phoenix.apache.org/language/index.html#create_view
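Mapping an existing HBase table into Phoenix typically looks like this in phoenix-sqlline (the namespace, table, column family, and column names below are placeholders):

```sql
-- Hypothetical sketch: expose an existing HBase table to Phoenix as a view.
-- The quoted names must match the HBase table and column family exactly
-- (quotes preserve case), and each mapped column names its column family.
CREATE VIEW "namespace"."tablename" (
    ROWKEY VARCHAR PRIMARY KEY,
    "cf"."col1" VARCHAR
);
```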

 

Thanks,

Will

Explorer

@willx,

CDH: 7.1.6.cdhp150

HBase version: 2.2.3.7.1.6.150-1

Phoenix version: 6.0.0.7.1.6.150-1

Phoenix driver version: 6.0.0.7

I found the issue is with the salt bucket. Is there any way to add a RegionSplitPolicy while inserting data through the HBase shell?

Explorer

Is there any way to configure KeyPrefixRegionSplitPolicy while inserting data through the HBase connector from a Java Spark application?

Master Collaborator

Please refer to this doc for split policies: https://blog.cloudera.com/apache-hbase-region-splitting-and-merging/

So far, based on your statement, I cannot conclude whether it is due to the salt bucket or the split policy; we need more evidence from logs.

We would therefore suggest you raise a Cloudera support case, so that the necessary information and logs can be collected and investigated.
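For reference, a split policy can be set per table when creating it in the HBase shell, roughly as below (the table name, column family, and prefix length are placeholders, not values from this thread):

```shell
# Hypothetical sketch: create an HBase table with KeyPrefixRegionSplitPolicy.
# prefix_length tells the policy how many leading rowkey bytes form a group
# that should never be split across two regions.
hbase shell <<'EOF'
create 'namespace:tablename', {NAME => 'cf'},
  {METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.KeyPrefixRegionSplitPolicy',
                'KeyPrefixRegionSplitPolicy.prefix_length' => '2'}}
EOF
```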

 

Please make sure the above questions are answered. In addition, we also need to collect the following:

From the HBase shell (note that the output must be piped through hbase shell, otherwise the file will only contain the literal command string):

- echo "scan 'namespace:tablename'" | hbase shell > /tmp/scan_meta.txt

- echo "describe 'namespace:tablename'" | hbase shell > /tmp/desc_table.txt

- echo "list_regions 'namespace:tablename'" | hbase shell > /tmp/list_regions.txt

phoenix-sqlline:

- select * from system.catalog;

- !tables

- select * from namespace.table;

- Your client code that uses the Phoenix driver, and the output that reflects the issue: "when I am querying data through the Phoenix driver the rowkey value is getting truncated (only the first letter) and the other columns are good."

Explorer

Hi @willx,

I confirmed it is a salt bucket issue by creating a table with SALT_BUCKETS=0, and it worked well.
I also created a sample table through sqlline with SALT_BUCKETS > 0 and inserted data from both the HBase shell and sqlline. The data inserted through sqlline is prefixed with one extra character, whereas the data inserted through the HBase shell is exactly the value I inserted.
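The working configuration described above would look roughly like this in phoenix-sqlline (the namespace, table, and column names are placeholders):

```sql
-- Hypothetical sketch: an unsalted table (SALT_BUCKETS = 0), so rowkeys
-- written by the HBase shell and rowkeys written through Phoenix are
-- byte-for-byte identical and nothing appears truncated on read.
CREATE TABLE IF NOT EXISTS MY_NS.MY_TABLE (
    ROWKEY VARCHAR NOT NULL PRIMARY KEY,
    CF.COL1 VARCHAR
)
COLUMN_ENCODED_BYTES = 0, SALT_BUCKETS = 0;
```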

Can you please help me create a table with the Phoenix driver using a different region split policy instead of the prefixed rowkey, or tell me how to generate the salt prefix to append when writing through the HBase shell?

 

Thanks,

Jyothsna

Explorer

Hi @willx ,

Can you please suggest whether there is any way to disable the rowkey-prefix region policy and use a different region split policy while creating the Phoenix table?

Thanks,
Jyothsna