Created on 07-09-2024 11:33 PM - edited 07-10-2024 12:02 AM
In HBase, I have a column qualifier containing data like the following:
ReportV10\x00\x00\x00\x00\x02\x02\x02
When I read this table from Spark using the SHC connector, I get junk characters in the result. Below is the piece of code I am using to read the HBase table:
catalog = '''{
    "table":{"namespace":"db1", "name":"tb1"},
    "rowkey":"key",
    "columns":{
        "rowkey":{"cf":"rowkey", "col":"key", "type":"string"},
        "nf_hh0":{"cf":"nf", "col":"hh0", "type":"string"}
    }
}'''

df = spark.read.option("catalog", catalog).format("org.apache.spark.sql.execution.datasources.hbase").load()
df.show(1, False)
+---------------------+------------+
|rowkey               |nf_hh0      |
+---------------------+------------+
|26273707950926220... |ReportV10�� |
+---------------------+------------+
Spark version: 2.3.2.3.1.0.319-3
HBase version: 2.0.2.3.1.0.319-3
Python version: 2.7.5
Question: Is there any way to read those hexadecimal escape sequences as-is into a DataFrame?
Created 07-16-2024 01:24 AM
Hi @ayukus0705 Welcome to our community! To help you get the best possible answer, I have tagged in our Spark experts @RangaReddy @Babasaheb who may be able to assist you further.
Please feel free to provide any additional information or details about your query, and we hope that you will find a satisfactory solution to your question.
Regards,
Vidya Sargur,
Created 07-16-2024 01:48 AM
Hi @ayukus0705
The nf_hh0 column data appears to be stored in a format other than a plain string. When you read it with a string data type, it can produce the garbled output shown above.
To resolve this, you can either change the column's data type in the catalog to match the actual data format, or convert the data to a string format. A sketch of the first option follows.
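For example, a minimal sketch of the first option, assuming the SHC catalog accepts a "binary" type for that column (the exact type name supported by your connector build may differ):

# Hedged sketch: map nf:hh0 as binary instead of string in the SHC catalog.
# "binary" as the catalog type is an assumption about the connector build in use.
catalog_binary = '''{
    "table":{"namespace":"db1", "name":"tb1"},
    "rowkey":"key",
    "columns":{
        "rowkey":{"cf":"rowkey", "col":"key", "type":"string"},
        "nf_hh0":{"cf":"nf", "col":"hh0", "type":"binary"}
    }
}'''

df = spark.read.option("catalog", catalog_binary) \
    .format("org.apache.spark.sql.execution.datasources.hbase") \
    .load()
df.printSchema()  # nf_hh0 should now appear as binary rather than string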
Created 09-05-2024 12:04 AM
Hi @RangaReddy
Thanks for looking into my question.
change the data type of the column to match the actual data format - I tried passing "binary" in the catalog but had no luck.
convert the data to a string format - That would mean modifying the data in HBase, which is not a practical option for us; the data size is somewhere around 50-60 TB.
I am looking for an option where we can directly read those hexadecimal escape sequences (i.e., ReportV10\x00\x00\x00\x00\x02\x02\x02) as-is into my Spark DataFrame.
Let me know if you need further clarity or information; we can set up a meeting to discuss this.
Regards,
Ayush
Created 07-22-2024 01:58 AM
@ayukus0705, Did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
Regards,
Vidya Sargur,
Created 09-23-2024 11:55 PM
Hello @ayukus0705 ,
A]
I am looking for an option where we can directly read those hexadecimal escape sequences (i.e., ReportV10\x00\x00\x00\x00\x02\x02\x02) as-is in my Spark DataFrame.
>> You will have to make sure the escape sequences are treated as raw binary data or strings, without any automatic decoding or transformation.
Following is an example of reading data as binary using Spark's binaryFile source (Scala):
val df = spark.read.format("binaryFile").load("path of your file here")
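Once the column comes back as binary, a small UDF can render the raw bytes in the escaped \xNN form the data was written with. A minimal PySpark sketch, assuming a DataFrame df with a binary column nf_hh0 (the DataFrame and column names are illustrative):

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def to_escaped(raw):
    # Render printable ASCII bytes as-is and everything else as \xNN,
    # e.g. "ReportV10\x00\x02" -> "ReportV10\\x00\\x02".
    if raw is None:
        return None
    return "".join(chr(b) if 32 <= b < 127 else "\\x%02x" % b for b in bytearray(raw))

to_escaped_udf = udf(to_escaped, StringType())

# nf_hh0 is assumed to be a binary column here
df.withColumn("nf_hh0_escaped", to_escaped_udf("nf_hh0")).show(1, False)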
B]
Alternatively, you can use the HBase-Spark connector to load the data as binary, so that no automatic decoding or transformation is applied to the column. Refer to the following docs for more details:
Private Cloud: https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-example-using-hb...?
Public Cloud: https://docs.cloudera.com/runtime/7.2.18/accessing-hbase/topics/hbase-using-hbase-spark-connector.ht...?
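A minimal PySpark sketch of that approach, assuming the HBase-Spark connector is on the classpath and that its hbase.columns.mapping option accepts a BINARY type for this qualifier (table, namespace, and column names are taken from the question):

# Hedged sketch: read nf:hh0 as BINARY via the HBase-Spark connector.
df = spark.read.format("org.apache.hadoop.hbase.spark") \
    .option("hbase.table", "db1:tb1") \
    .option("hbase.columns.mapping",
            "key STRING :key, nf_hh0 BINARY nf:hh0") \
    .option("hbase.spark.use.hbasecontext", False) \
    .load()

df.printSchema()   # nf_hh0 should come back as binary
df.show(1, False)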
If this response assisted with your query, please take a moment to log in and click on KUDOS 🙂 and "Accept as Solution" below this post.
Thank you.
Created 10-01-2024 03:39 AM
@ayukus0705, Did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
Regards,
Vidya Sargur,