Member since: 03-01-2016
Posts: 104
Kudos Received: 97
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1666 | 06-03-2018 09:22 PM
 | 28161 | 05-21-2018 10:31 PM
 | 2172 | 10-19-2016 07:13 AM
02-14-2018
11:14 PM
PROBLEM: There are multiple issues connecting Windows clients, such as the SQuirreL SQL client, to Phoenix/HBase on a Kerberos-enabled cluster.
SOLUTION: Follow the steps below, which are known to work.
1. Create a JAAS configuration file, hbase.jaas, on the client machine (keytab and principal stay empty when using the ticket cache):
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=false
  useTicketCache=true
  renewTicket=true
  serviceName="zookeeper"
  keytab=""
  principal="";
};
2. Copy the Kerberos configuration file (krb5.conf or krb5.ini) from a cluster node to the local client machine.
3. Add the Java options below to the SQuirreL launcher script [/Applications/SQuirreLSQL.app/Contents/MacOS/squirrel-sql.sh]:
-Djava.security.auth.login.config="/Users/gsharma/hbase/hbase.jaas"
-Djava.security.krb5.conf="/Users/gsharma/hbase/krb5.conf"
For example:
$JAVACMD -Xmx256m -cp "$CP" -Djava.security.auth.login.config="/Users/gsharma/hbase/hbase.jaas" -Djava.security.krb5.conf="/Users/gsharma/hbase/krb5.conf" $MACOSX_SQUIRREL_PROPS -splash:"$SQUIRREL_SQL_HOME/icons/splash.jpg" net.sourceforge.squirrel_sql.client.Main --log-config-file "$UNIX_STYLE_HOME"/log4j.properties --squirrel-home "$UNIX_STYLE_HOME" $NATIVE_LAF_PROP $SCRIPT_ARGS
4. Download the Phoenix client driver JAR [phoenix-version-client.jar].
5. Download the hdfs-site.xml, hbase-site.xml, and core-site.xml files from the HBase server to a local client folder.
6. Open the SQuirreL UI and register the Phoenix driver. (Example URL: jdbc:phoenix:1.openstacklocal:2181:/hbase-secure:hbase/5.openstacklocal@EXAMPLE.COM:/Users/gsharma/hbase/hbase.service.keytab)
7. Create an alias in the SQuirreL UI to connect to HBase using the registered driver.
Example JDBC URL: jdbc:phoenix:1.openstacklocal:2181:/hbase-secure:hbase/5.openstacklocal@EXAMPLE.COM:/Users/gsharma/hbase/hbase.service.keytab
Note that Phoenix does not support Windows-style paths for the keytab file. If the keytab file is under C:\Users\Hbase\hbase.service.keytab, use "/Users/Hbase/hbase.service.keytab" in the JDBC URL.
8. Check that the connection is successful.
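If the SQuirreL connection fails, it can help to verify the same settings outside the UI with a small JDBC program. A minimal sketch, reusing the JAAS/krb5 paths from step 3 and the URL format from step 6 (all host names, paths, and principals are the placeholder examples above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixKerberosCheck {
    public static void main(String[] args) throws Exception {
        // Same JVM options that step 3 adds to the SQuirreL launcher script.
        System.setProperty("java.security.auth.login.config", "/Users/gsharma/hbase/hbase.jaas");
        System.setProperty("java.security.krb5.conf", "/Users/gsharma/hbase/krb5.conf");

        // Same URL format registered in step 6 / used by the alias in step 7.
        String url = "jdbc:phoenix:1.openstacklocal:2181:/hbase-secure:"
                + "hbase/5.openstacklocal@EXAMPLE.COM:/Users/gsharma/hbase/hbase.service.keytab";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // SYSTEM.CATALOG exists on any initialized Phoenix cluster.
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM SYSTEM.CATALOG")) {
            if (rs.next()) {
                System.out.println("Connected. SYSTEM.CATALOG rows: " + rs.getLong(1));
            }
        }
    }
}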
02-14-2018
11:01 PM
PROBLEM: A Phoenix table was created on top of an existing HBase table. The Phoenix table is not working as expected (or is no longer needed) and must be deleted, but the underlying HBase table holds valid data and must be kept.
CONSTRAINT: Simply dropping the table from the Phoenix shell would delete the HBase table too.
SOLUTION:
Step 1: Delete the Phoenix table entry from SYSTEM.CATALOG as follows:
delete from SYSTEM.CATALOG where TABLE_NAME='table_name';
Step 2: Restart the region server hosting SYSTEM.CATALOG to clear the metadata cache, which may otherwise still prevent creating a table with the same name from Phoenix.
Note: This approach may not work well if there are indexes or views on top of the table, as we would end up corrupting the catalog table. You may also try snapshotting the existing HBase table, dropping the table completely from Phoenix, and then cloning/restoring the HBase table from the snapshot. A hedged sketch of Step 1, including a check for the dependent objects mentioned in this note, follows below.
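The JDBC sketch below illustrates Step 1 together with a dependency pre-check. It assumes the SYSTEM.CATALOG layout of Phoenix 4.x, where index and view rows reference their base table through the DATA_TABLE_NAME column; verify the column names on your release before relying on it.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DropPhoenixMappingOnly {
    public static void main(String[] args) throws Exception {
        String table = "TABLE_NAME"; // the Phoenix table to unmap (placeholder)
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
            // Look for indexes/views that reference this table; if any exist,
            // prefer the snapshot + drop + restore approach instead.
            try (PreparedStatement check = conn.prepareStatement(
                    "SELECT TABLE_NAME, TABLE_TYPE FROM SYSTEM.CATALOG WHERE DATA_TABLE_NAME = ?")) {
                check.setString(1, table);
                try (ResultSet rs = check.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("Dependent object found: " + rs.getString(1)
                                + " (type " + rs.getString(2) + "); aborting.");
                        return;
                    }
                }
            }
            // Step 1: remove the Phoenix mapping; the HBase table is untouched.
            try (PreparedStatement del = conn.prepareStatement(
                    "DELETE FROM SYSTEM.CATALOG WHERE TABLE_NAME = ?")) {
                del.setString(1, table);
                del.executeUpdate();
            }
            conn.commit(); // Phoenix connections are not auto-commit by default
            System.out.println("Catalog rows deleted; now restart the region server"
                    + " hosting SYSTEM.CATALOG (Step 2).");
        }
    }
}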
06-04-2017
09:33 AM
1 Kudo
PROBLEM: On HDP 2.5.3 with Ambari 2.4.2.0, using Kerberos and Ranger for HBase authorization, we need to grant pretty much ALL permissions on the 'default' namespace to every user so that they can connect using sqlline.py. Otherwise the connection fails with:
Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'abc@NA.EXAMPLE.COM' (action=create)
at org.apache.ranger.authorization.hbase.AuthorizationSession.publishResults(AuthorizationSession.java:261)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.authorizeAccess(RangerAuthorizationCoprocessor.java:595)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.requirePermission(RangerAuthorizationCoprocessor.java:664)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preCreateTable(RangerAuthorizationCoprocessor.java:769)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preCreateTable(RangerAuthorizationCoprocessor.java:496)
at org.apache.hadoop.hbase.master.MasterCoprocessorHost$11.call(MasterCoprocessorHost.java:222)
at org.apache.hadoop.hbase.master.MasterCoprocessorHost.execOperation(MasterCoprocessorHost.java:1146)
at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateTable(MasterCoprocessorHost.java:218)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1603)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:462)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:57204)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'abc@NA.EXAMPLE.COM' (action=create)
EXPECTED BEHAVIOR: Once the Phoenix SYSTEM tables are created, Read permission on the 'default' namespace alone should allow the user to connect using sqlline.py.
ROOT CAUSE: Phoenix uses HBaseAdmin.getTableDescriptor during most of its checks for a valid Phoenix version, and that call requires CREATE or ADMIN permission. This is a known issue, tracked in PHOENIX-3652. The fix is available in Phoenix 4.8.3 and 4.10.
SOLUTION: To get a hotfix backported to earlier Phoenix versions, please log a case with HWX.
06-04-2017
08:59 AM
STEPS TO REPRODUCE:
1. Create a Phoenix connection.
2. CREATE SCHEMA test
3. DROP SCHEMA test
4. Create the test schema again. This throws the exceptions below:
0: jdbc:phoenix:> create schema "test";
Error: ERROR 721 (42M04): Schema with given name already exists schemaName=test (state=42M04,code=721)
org.apache.phoenix.schema.SchemaAlreadyExistsException: ERROR 721 (42M04): Schema with given name already exists schemaName=test
5. Attempt to drop the test schema again:
0: jdbc:phoenix:> drop schema "test";
Error: ERROR 722 (43M05): Schema does not exists schemaName=test (state=43M05,code=722)
org.apache.phoenix.schema.SchemaNotFoundException: ERROR 722 (43M05): Schema does not exists schemaName=test
If you run list_namespace in the HBase shell, you will see that the test namespace still exists.
ROOT CAUSE: This is a known issue discussed in Apache JIRA PHOENIX-3694.
SOLUTION: The issue is fixed in Phoenix 4.10. For a backported hotfix, please log a case with HWX.
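To confirm programmatically that the namespace was left behind (the equivalent of list_namespace in the HBase shell), here is a minimal sketch using the HBase 1.x Admin API; it assumes hbase-site.xml and the HBase client libraries are on the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListNamespaces {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // After DROP SCHEMA "test" hits PHOENIX-3694, "test" still shows up here.
            for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
                System.out.println(ns.getName());
            }
        }
    }
}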
06-04-2017
08:08 AM
PROBLEM: While creating an external Hive table using the Hive storage handler for Phoenix, table creation fails if a schema name is included in the Phoenix table name.
STEPS TO REPRODUCE:
1. Hive DDL for the table:
create external table test.hive_e_phoenix1 (
scd string,
scddesc string
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
'phoenix.table.name'='myschema.mytbl',
'phoenix.zookeeper.quorum'='node1,node2,node3',
'phoenix.rowkeys'='scd'
);
2. The following exception is seen:
2016-07-17 16:02:25,898 ERROR [HiveServer2-Background-Pool: Thread-202]: operation.Operation (SQLOperation.java:run(209)) - Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Phoenix table myschema.mytbl doesn't exist)
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:156)
at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Phoenix table myschema.mytbl doesn't exist)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:720)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4135)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154)
... 11 more
Caused by: MetaException(message:Phoenix table myschema.mytbl doesn't exist)
at org.apache.phoenix.hive.PhoenixMetaHook.preCreateTable(PhoenixMetaHook.java:63)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:664)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:657)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
at com.sun.proxy.$Proxy6.createTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:714)
... 21 more
ROOT CAUSE: This is a known issue with Phoenix 4.8, discussed in Apache JIRA PHOENIX-3078.
SOLUTION: Please log a case with HWX and request a hotfix for HDP 2.5, backported from HDP 2.6.
REFERENCE: https://issues.apache.org/jira/browse/PHOENIX-3078
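Before requesting the hotfix, it can be worth confirming how Phoenix actually recorded the schema and table names, since the value of 'phoenix.table.name' must match them exactly. A hedged JDBC sketch; the SYSTEM.CATALOG column names follow common Phoenix 4.x layouts, and the identifiers are the placeholders from the DDL above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CheckPhoenixTableNames {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:node1:2181");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT DISTINCT TABLE_SCHEM, TABLE_NAME FROM SYSTEM.CATALOG "
                     + "WHERE TABLE_NAME = ?")) {
            ps.setString(1, "MYTBL"); // Phoenix upper-cases unquoted identifiers
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // A NULL schema means the table lives in the default schema.
                    System.out.println(rs.getString(1) + " . " + rs.getString(2));
                }
            }
        }
    }
}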
06-04-2017
12:36 AM
PROBLEM: The Phoenix ODBC driver strips out the "hint" part of SELECT statements. The ODBC driver log looks like this:
Mar 20 16:15:00.601 INFO 6380 Statement::SQLSetStmtAttrW: Attribute: SQL_ATTR_MAX_ROWS (1)
---- Comment: the original query passed in has the hint.
Mar 20 16:15:00.602 INFO 6380 StatementState::InternalPrepare: Preparing query: select /*+ INDEX(c.more_xref_cad more_xref_acct_idx) */ * from c.more_xref_cad where cad_acct_id = 219980018
Mar 20 16:15:00.602 DEBUG 6380 RESTAction::HMDebugCallback: Infor type: CURLINFO_TEXT
.....
Mar 20 16:15:00.802 DEBUG 6380 RESTAction::HMDebugCallback: Info data: Connected to localhost (127.0.0.1) port 8765 (#1)
Mar 20 16:15:00.802 DEBUG 6380 RESTAction::HMDebugCallback: Infor type: CURLINFO_HEADER_OUT
Mar 20 16:15:00.802 DEBUG 6380 RESTAction::HMDebugCallback: Info data: POST / HTTP/1.1 Host: localhost:8765 Content-Type: application/octet-stream Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2 User-Agent: Phoenix ODBC Connection: keep-alive Content-Length: 160
Mar 20 16:15:00.803 DEBUG 6380 RESTAction::HMDebugCallback: Infor type: CURLINFO_DATA_OUT
---- Comment: the query generated and submitted to PQS has no hint.
Mar 20 16:15:00.803 DEBUG 6380 RESTAction::HMDebugCallback: Info data: {"request":"prepare","connectionId":"2166b30f-1bf8-1f9d-309e-4009877a1a62","sql":"SELECT * FROM c.more_xref_cad WHERE cad_acct_id = 219980018","maxRowCount":-1}
Mar 20 16:15:00.803 DEBUG 6380 RESTAction::HMDebugCallback: Infor type: CURLINFO_TEXT
Mar 20 16:15:00.803 DEBUG 6380 RESTAction::HMDebugCallback: Info data: upload completely sent off: 160 out of 160 bytes
ROOT CAUSE: Per the Simba team, the ODBC driver contains a component that removes ODBC escape sequences, e.g. turning {fn SIN(col_1)} into SIN(col_1), because Phoenix does not support such escape sequences while BI tools emit them. The problem is that this component also removes the hint, since it is currently treated as a comment.
SOLUTION: The Phoenix ODBC Driver GA (v1.0.6.1008) has the fix. Please raise a support case with HWX if you need additional assistance.
06-04-2017
12:09 AM
PROBLEM: The inner query does not recognize the alias for the table and treats it as a column family. The following exception is seen:
0: jdbc:phoenix:localhost:2181> SELECT a.tgb_id FROM "APC:TGB_CURRENT_STATUS" a WHERE a.utc_date_epoch in (SELECT b.utc_date_epoch FROM "APC:TGB_STATUS" b WHERE a.tgb_id = b.tgb_id ORDER BY b.utc_date_epoch DESC limit 2)
. . . . . . . . . . . . . . . > ;
Error: ERROR 1001 (42I01): Undefined column family. familyName=A.null (state=42I01,code=1001)
org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): Undefined column family. familyName=A.null
at org.apache.phoenix.schema.PTableImpl.getColumnFamily(PTableImpl.java:724)
at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:365)
STEPS TO REPRODUCE:
1. Create two tables from Phoenix with the schema below:
CREATE TABLE "${namespace}:TGB_STATUS" (
tgb_id integer not null,
utc_date_epoch integer not null,
tgb_name varchar(40),
city_name varchar(25),
tgb_type_description varchar(64),
local_date_key integer,
utc_date_key integer,
enclosure_type integer,
door_tamper_flag boolean,
over_temp_flag boolean,
under_temp_flag boolean,
voltage_fault_flag boolean,
power_fail_flag boolean,
battery_low_flag boolean,
major_rectifier_alarm_flag boolean,
minor_rectifier_alarm_flag boolean,
env_fault_alarm_flag boolean,
fusion_proc_load integer,
fusion_proc_uptime integer,
pcc_installed_flag boolean,
vswr_alarm_flag boolean,
pcc_power_supply_flag boolean,
forward_power integer,
reflected_power integer,
alarm_bits integer,
alarm_bits_text varchar(8),
slot0_temperature integer,
slot0_channel integer,
slot0_active boolean,
slot1_temperature integer,
slot1_channel integer,
slot1_active boolean,
slot2_temperature integer,
slot2_channel integer,
slot2_active boolean,
slot3_temperature integer,
slot3_channel integer,
slot3_active boolean,
slot4_temperature integer,
slot4_channel integer,
slot4_active boolean,
slot5_temperature integer,
slot5_channel integer,
slot5_active boolean,
slot6_temperature integer,
slot6_channel integer,
slot6_active boolean,
slot7_temperature integer,
slot7_channel integer,
slot7_active boolean,
slot8_temperature integer,
slot8_channel integer,
slot8_active boolean,
slot9_temperature integer,
slot9_channel integer,
slot9_active boolean,
CONSTRAINT pk_tgb_status PRIMARY KEY (tgb_id ASC, utc_date_epoch DESC)
) SALT_BUCKETS = ${tgb_status};
and
CREATE TABLE "${namespace}:TGB_CURRENT_STATUS" (
tgb_id integer not null,
utc_date_epoch integer,
tgb_name varchar(40),
city_name varchar(25),
tgb_type_description varchar(64),
local_date_key integer,
utc_date_key integer,
enclosure_type integer,
door_tamper_flag boolean,
over_temp_flag boolean,
under_temp_flag boolean,
voltage_fault_flag boolean,
power_fail_flag boolean,
battery_low_flag boolean,
major_rectifier_alarm_flag boolean,
minor_rectifier_alarm_flag boolean,
env_fault_alarm_flag boolean,
fusion_proc_load integer,
fusion_proc_uptime integer,
pcc_installed_flag boolean,
vswr_alarm_flag boolean,
pcc_power_supply_flag boolean,
forward_power integer,
reflected_power integer,
alarm_bits integer,
alarm_bits_text varchar(8),
slot0_temperature integer,
slot0_channel integer,
slot0_active boolean,
slot1_temperature integer,
slot1_channel integer,
slot1_active boolean,
slot2_temperature integer,
slot2_channel integer,
slot2_active boolean,
slot3_temperature integer,
slot3_channel integer,
slot3_active boolean,
slot4_temperature integer,
slot4_channel integer,
slot4_active boolean,
slot5_temperature integer,
slot5_channel integer,
slot5_active boolean,
slot6_temperature integer,
slot6_channel integer,
slot6_active boolean,
slot7_temperature integer,
slot7_channel integer,
slot7_active boolean,
slot8_temperature integer,
slot8_channel integer,
slot8_active boolean,
slot9_temperature integer,
slot9_channel integer,
slot9_active boolean,
CONSTRAINT pk_tgb_current_status PRIMARY KEY (tgb_id ASC)
);
2. Run the following query:
SELECT a.tgb_id FROM "APC:TGB_CURRENT_STATUS" a WHERE a.utc_date_epoch in (SELECT b.utc_date_epoch FROM "APC:TGB_STATUS" b WHERE a.tgb_id = b.tgb_id ORDER BY b.utc_date_epoch DESC limit 2)
3. Try replacing the alias in the inner query with the full table name; the error still occurs:
SELECT a.tgb_id FROM "APC:TGB_CURRENT_STATUS" a WHERE a.utc_date_epoch in (SELECT b.utc_date_epoch FROM "APC:TGB_STATUS" b WHERE "APC:TGB_CURRENT_STATUS".tgb_id = b.tgb_id ORDER BY b.utc_date_epoch DESC limit 2)
ROOT CAUSE: This is a known issue, reported in internal BUG-78975, with no workaround as of now.
06-03-2017
11:36 PM
PROBLEM: When selecting from a relatively large table (a few thousand rows), some rows come back partially missing. When narrowing the filter to return those specific rows, the values appear as expected.
STEPS TO REPRODUCE:
1. Create a table:
CREATE TABLE IF NOT EXISTS TEST (
BUCKET VARCHAR,
TIMESTAMP_DATE TIMESTAMP,
TIMESTAMP UNSIGNED_LONG NOT NULL,
SRC VARCHAR,
DST VARCHAR,
ID VARCHAR,
ION VARCHAR,
IC BOOLEAN NOT NULL,
MI UNSIGNED_LONG,
AV UNSIGNED_LONG,
MA UNSIGNED_LONG,
CNT UNSIGNED_LONG,
DUMMY VARCHAR,
CONSTRAINT pk PRIMARY KEY (BUCKET, TIMESTAMP DESC, SRC, DST, ID, ION, IC)
);
2. Use a Python script to generate a CSV with 5000 rows:
for i in xrange(5000):
    print "5SEC,2016-07-21 07:25:35.{i},146908593500{i},WWWWWWWW,AAA,BBBB,CCCCCCCC,false,{i}1181000,1788000{i},2497001{i},{i},aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa{i}".format(i=i)
3. Bulk-insert the CSV into the table:
phoenix/bin/psql.py localhost -t TEST large.csv
4. Observe that one row contains no TIMESTAMP_DATE and null values in MI and MA:
0: jdbc:phoenix:localhost:2181> select * from TEST;
....
+---------+--------------------------+-------------------+-----------+------+-------+-----------+--------+--------------+--------------+--------------+-------+----------------------------------------------------------------------------+
| BUCKET | TIMESTAMP_DATE | TIMESTAMP | SRC | DST | ID | ION | IC | MI | AV | MA | CNT | DUMMY |
+---------+--------------------------+-------------------+-----------+------+-------+-----------+--------+--------------+--------------+--------------+-------+----------------------------------------------------------------------------+
| 5SEC | 2016-07-21 07:25:35.100 | 1469085935001000 | WWWWWWWW | AAA | BBBB | CCCCCCCC | false | 10001181000 | 17880001000 | 24970011000 | 1000 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa1000 |
| 5SEC | 2016-07-21 07:25:35.999 | 146908593500999 | WWWWWWWW | AAA | BBBB | CCCCCCCC | false | 9991181000 | 1788000999 | 2497001999 | 999 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa999 |
| 5SEC | 2016-07-21 07:25:35.998 | 146908593500998 | WWWWWWWW | AAA | BBBB | CCCCCCCC | false | 9981181000 | 1788000998 | 2497001998 | 998 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa998 |
| 5SEC | | 146908593500997 | WWWWWWWW | AAA | BBBB | CCCCCCCC | false | null | 1788000997 | null | 997 | |
| 5SEC | 2016-07-21 07:25:35.996 | 146908593500996 | WWWWWWWW | AAA | BBBB | CCCCCCCC | false | 9961181000 | 1788000996 | 2497001996 | 996 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa996 |
| 5SEC | 2016-07-21 07:25:35.995 | 146908593500995 | WWWWWWWW | AAA | BBBB | CCCCCCCC | false | 9951181000 | 1788000995 | 2497001995 | 995 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa995 |
| 5SEC | 2016-07-21 07:25:35.994 | 146908593500994 | WWWWWWWW | AAA | BBBB | CCCCCCCC | false | 9941181000 | 1788000994 | 2497001994 | 994 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa994 |
....
5. When selecting that row specifically, the values are correct:
0: jdbc:phoenix:localhost:2181> select * from TEST where timestamp = 146908593500997;
+---------+--------------------------+------------------+-----------+------+-------+-----------+--------+-------------+-------------+-------------+------+---------------------------------------------------------------------------+
| BUCKET | TIMESTAMP_DATE | TIMESTAMP | SRC | DST | ID | ION | IC | MI | AV | MA | CNT | DUMMY |
+---------+--------------------------+------------------+-----------+------+-------+-----------+--------+-------------+-------------+-------------+------+---------------------------------------------------------------------------+
| 5SEC | 2016-07-21 07:25:35.997 | 146908593500997 | WWWWWWWW | AAA | BBBB | CCCCCCCC | false | 9971181000 | 1788000997 | 2497001997 | 997 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa997 |
+---------+--------------------------+------------------+-----------+------+-------+-----------+--------+-------------+-------------+-------------+------+---------------------------------------------------------------------------+
1 row selected (0.159 seconds)
SOLUTION: This is a known issue and is unresolved as of now. Please track it under PHOENIX-3112.
WORKAROUND: Try increasing the value of "hbase.client.scanner.max.result.size", which has helped in many cases, though it has the side effect of inducing memory pressure. A client-side sketch follows below.
REFERENCES: https://issues.apache.org/jira/browse/PHOENIX-3112
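Phoenix passes JDBC connection properties through to the underlying HBase client configuration, so the workaround can be applied per connection without editing hbase-site.xml. A minimal sketch; the 10 MB value is an arbitrary example, not a tuned recommendation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class ScannerSizeWorkaround {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Raise the per-RPC scanner result size. Larger values reduce chunking
        // of wide rows but increase memory pressure on client and region server.
        props.setProperty("hbase.client.scanner.max.result.size",
                String.valueOf(10L * 1024 * 1024)); // 10 MB, example value only
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM TEST")) {
            int rows = 0;
            while (rs.next()) {
                rows++; // re-check TIMESTAMP_DATE, MI, and MA for missing values here
            }
            System.out.println("Scanned " + rows + " rows");
        }
    }
}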
06-03-2017
10:55 PM
PROBLEM: Customers create a namespace and assign users "RX" permission; however, the users are not able to create tables in the namespace and require "RWX" permission.
CREATE TABLE example_datapipe.table_test1 ( id BIGINT not null primary key);
Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=g0123434@ABC.EXAMPLE.COM, scope=SYSTEM:CATALOG, family=0:SALT_BUCKETS, params=[table=SYSTEM:CATALOG,family=0:SALT_BUCKETS],action=WRITE)
ROOT CAUSE: This is expected behavior. To create a Phoenix table, you MUST be able to create the HBase table, and you MUST be able to write to the SYSTEM.CATALOG table.
WORKAROUND: If you do not want normal users to have write access on SYSTEM.CATALOG, try one of the following alternatives:
1) Have an admin manually create the Phoenix tables for the unprivileged users.
2) Temporarily grant users write access to the SYSTEM.CATALOG table and revoke it later (a sketch follows below).
REFERENCE: Internal discussion under EAR-5831
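Since this environment would typically use Ranger for HBase authorization, the temporary grant in alternative 2 is normally expressed as a short-lived Ranger policy on SYSTEM:CATALOG. Purely as an illustration, the sketch below uses the native HBase 1.x AccessControlClient API instead; the user name is a placeholder and hbase-site.xml is assumed to be on the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission;

public class TemporaryCatalogGrant {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        TableName catalog = TableName.valueOf("SYSTEM:CATALOG");
        String user = "g0123434"; // hypothetical short name of the affected user
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            // Grant WRITE on SYSTEM:CATALOG so the user can create Phoenix tables.
            AccessControlClient.grant(conn, catalog, user, null, null, Permission.Action.WRITE);
            System.out.println("WRITE granted on SYSTEM:CATALOG to " + user);
            // ... the user creates their Phoenix tables here ...
            // Revoke the temporary grant once the tables exist.
            AccessControlClient.revoke(conn, catalog, user, null, null, Permission.Action.WRITE);
            System.out.println("WRITE revoked from " + user);
        }
    }
}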
06-03-2017
10:19 PM
STEPS TO REPRODUCE:
1. Set up a table and a local index:
create table mytest (col1 varchar, col2 varchar constraint pk primary key (col1));
upsert into mytest values ('123','123');
upsert into mytest values ('456','456');
create local index myidx on mytest (col2);
2. Obtain the current timestamp and prove the rows can be seen:
select cast(current_time() as bigint) from mytest limit 1;
+--------------------------------------------+
| TO_BIGINT(TIME '2017-04-19 09:57:47.867') |
+--------------------------------------------+
| 1492595867867 |
+--------------------------------------------+
sqlline.py "aw25k-3:2181/hbase-secure;currentSCN=1492595867867"
select * from mytest;
+-------+-------+
| COL1 | COL2 |
+-------+-------+
| 123 | 123 |
| 456 | 456 |
+-------+-------+
3. Disconnect sqlline and reconnect without the SCN. Drop the local index, insert more rows, and recreate the index:
sqlline.py
drop index myidx on mytest;
upsert into mytest values ('678','678');
upsert into mytest values ('456','654');
create local index myidx on mytest (col2);
4. Re-attempt the flashback query:
sqlline.py "example-3:2181/hbase-secure;currentSCN=1492595867867"
select * from mytest;
+-------+-------+
| COL1 | COL2 |
+-------+-------+
+-------+-------+
No rows selected (0.18 seconds)
select * from mytest where col2 = '123'
+-------+-------+
| COL1 | COL2 |
+-------+-------+
+-------+-------+
No rows selected (0.18 seconds)
5. Bypass the index with a NO_INDEX hint; the rows reappear:
SELECT /*+NO_INDEX*/ * from mytest;
+-------+-------+
| COL1 | COL2 |
+-------+-------+
| 123 | 123 |
| 456 | 456 |
+-------+-------+
ROOT CAUSE: Data for newly created indexes is written with the latest timestamp of the data table rather than the timestamps of the underlying rows. The fix is to create the initial index data with the data's own timestamps (at least for local indexes).
SOLUTION: The fix is available in the next maintenance release of HDP 2.6. Request a hotfix via a support case if you are on an earlier version.
REFERENCE: Internal BUG-79529
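For reference, the same flashback read can be issued from plain JDBC by passing Phoenix's CurrentSCN connection property; a minimal sketch reusing the host and timestamp from the steps above, with a NO_INDEX hint to bypass the affected index until the fix is applied:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class FlashbackQuery {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Pin all reads to the timestamp captured in step 2.
        props.setProperty("CurrentSCN", "1492595867867");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:example-3:2181:/hbase-secure", props);
             Statement stmt = conn.createStatement();
             // Force the data table until the index timestamp fix is in place.
             ResultSet rs = stmt.executeQuery("SELECT /*+NO_INDEX*/ * FROM mytest")) {
            while (rs.next()) {
                System.out.println(rs.getString("COL1") + " | " + rs.getString("COL2"));
            }
        }
    }
}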