Member since: 06-13-2018
Posts: 6
Kudos Received: 2
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2410 | 02-26-2019 11:34 AM |
05-25-2022
11:28 PM
@TonyQiu Sorry, I've been away for a while. Creating a table in a kerberized cluster should be possible as the hbase user. I just tried it out; see the flow below. I didn't need to kinit explicitly because the hbase user already had a valid Kerberos ticket.

Switch to the hbase user:

```
[root@malindi keytabs]# su - hbase
Last login: Wed May 25 21:51:56 CEST 2022
```

Check that the Kerberos ticket is valid:

```
[hbase@malindi ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1016
Default principal: hbase-jair@KENYA.KE

Valid starting       Expires              Service principal
05/25/2022 21:51:57  05/26/2022 21:51:57  krbtgt/KENYA.KE@KENYA.KE
```
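If the ticket had expired, you could obtain a fresh one from the keytab before continuing. A minimal sketch, assuming the usual HDP keytab location (the path is an example; adjust it and the principal to your environment):

```
# hypothetical keytab path; list the principals it holds with: klist -kt <keytab>
[hbase@malindi ~]$ kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-jair@KENYA.KE
```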
Connect to the HBase shell as hbase and list the existing tables:

```
[hbase@malindi ~]$ hbase shell
hbase(main):006:0> list
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
2 row(s) in 0.6660 seconds
=> ["ATLAS_ENTITY_AUDIT_EVENTS", "atlas_titan"]
```
Create the emp table:

```
hbase(main):007:0> create 'emp', 'personal data', 'professional data'
0 row(s) in 4.9500 seconds
=> Hbase::Table - emp
```
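If you need to create the table from application code rather than the shell, the same Kerberos login applies there. A minimal sketch, assuming the HBase 2.x client API and an example keytab path (on HBase 1.x you would use HTableDescriptor/HColumnDescriptor instead):

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ColumnFamilyDescriptorBuilder, ConnectionFactory, TableDescriptorBuilder}
import org.apache.hadoop.security.UserGroupInformation

object CreateEmp {
  def main(args: Array[String]): Unit = {
    // hbase-site.xml and core-site.xml must be on the classpath
    val conf = HBaseConfiguration.create()

    // Log in from a keytab instead of relying on the ticket cache.
    // The principal is from the session above; the keytab path is an assumption.
    UserGroupInformation.setConfiguration(conf)
    UserGroupInformation.loginUserFromKeytab(
      "hbase-jair@KENYA.KE", "/etc/security/keytabs/hbase.headless.keytab")

    val connection = ConnectionFactory.createConnection(conf)
    try {
      val admin = connection.getAdmin
      val name  = TableName.valueOf("emp")
      if (!admin.tableExists(name)) {
        // Same two column families as the shell example
        val table = TableDescriptorBuilder.newBuilder(name)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("personal data"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("professional data"))
          .build()
        admin.createTable(table)
      }
    } finally {
      connection.close()
    }
  }
}
```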
List the tables again to validate that emp was created:

```
hbase(main):009:0> list
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
emp
3 row(s) in 0.0130 seconds
=> ["ATLAS_ENTITY_AUDIT_EVENTS", "atlas_titan", "emp"]
```
Describe the emp table:

```
hbase(main):010:0> describe 'emp'
Table emp is ENABLED
emp
COLUMN FAMILIES DESCRIPTION
{NAME => 'personal data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'professional data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2 row(s) in 0.8770 seconds
```
Update the table, adding a new column family with a custom block size and compression:

```
hbase(main):011:0> alter 'emp', {NAME => 'metrics', BLOCKSIZE => '16384', COMPRESSION => 'SNAPPY'}
Updating all regions with the new schema...
0/1 regions updated.
1/1 regions updated.
Done.
0 row(s) in 4.9660 seconds
```

Can you share your snippet?
02-19-2020
10:49 PM
With newer versions of Spark, the sqlContext is not loaded by default; you have to create it explicitly:

```
scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
warning: there was one deprecation warning; re-run with -deprecation for details
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@6179af64

scala> import sqlContext.implicits._
import sqlContext.implicits._

scala> sqlContext.sql("describe mytable")
res2: org.apache.spark.sql.DataFrame = [col_name: string, data_type: string ... 1 more field]
```

I'm working with Spark 2.3.2.
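As the deprecation warning hints, SQLContext is deprecated in Spark 2.x; the preferred entry point is SparkSession. A minimal sketch of the equivalent approach (the app name is a placeholder, and Hive support is assumed so that tables like mytable are visible):

```scala
// In spark-shell a SparkSession is already available as `spark`:
//   spark.sql("describe mytable").show()
// In a standalone application, build the session yourself:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("describe-example")   // hypothetical app name
  .enableHiveSupport()           // needed so Hive tables like mytable are visible
  .getOrCreate()

spark.sql("describe mytable").show()
```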
03-14-2019
07:51 AM
I had this error after rolling back the SSL config: "nifi Cannot replicate request to Node because the node is not connected". It works now. Thanks.
06-14-2018
04:23 AM
Yes, I already tried that, but with no results. After some time, the Hadoop process ends up correcting the issue by itself, but I'm trying to understand the difference between the two commands; they should return the same diagnosis. BR,