Member since: 09-11-2015
Posts: 23
Kudos Received: 25
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2463 | 06-02-2017 10:59 AM |
| | 1626 | 12-22-2016 04:20 PM |
05-29-2021
02:37 PM
@Onedile wrote: Yes this is possible. You need to kinit with the username that has been granted access to the SQL Server DB and tables. Integrated security passes your credentials to the SQL Server using Kerberos: "jdbc:sqlserver://sername.domain.co.za:1433;integratedSecurity=true;databaseName=SCHEMA;authenticationScheme=JavaKerberos;" This worked for me.
It doesn't work; the job still fails with the latest MSSQL JDBC driver, because the Kerberos tokens are lost when the mappers spawn (YARN transitions the job to its internal security subsystem):
21/05/29 19:00:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1616335290043_2743822
21/05/29 19:00:40 INFO mapreduce.JobSubmitter: Executing with tokens: [Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for c795701: HDFS_DELEGATION_TOKEN owner=c795701@XX.XXXX.XXXXXXX.COM, renewer=yarn, realUser=, issueDate=1622314832608, maxDate=1622919632608, sequenceNumber=29194128, masterKeyId=1856)]
21/05/29 19:01:15 INFO mapreduce.Job: Task Id : attempt_1616335290043_2743822_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: com.microsoft.sqlserver.jdbc.SQLServerException: Integrated authentication failed. ClientConnectionId:53879236-81e7-4fc6-88b9-c7118c02e7be
Caused by: java.security.PrivilegedActionException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
Use the jTDS driver as recommended here.
06-11-2017
10:34 AM
Problem: A Phoenix query using PERCENTILE_CONT fails with a NullPointerException. For example:
CREATE TABLE IF NOT EXISTS P_C (
COL1 INTEGER NOT NULL PRIMARY KEY,
COL2 INTEGER
);
SELECT PERCENTILE_CONT (0.99) WITHIN GROUP (ORDER BY COL2 ASC) FROM P_C;
java.lang.NullPointerException
at org.apache.phoenix.expression.aggregator.PercentileClientAggregator.evaluate(PercentileClientAggregator.java:82)
at org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:112)
at org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:93)
at org.apache.phoenix.expression.aggregator.Aggregators.toBytes(Aggregators.java:109)
at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:44)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
at sqlline.BufferedRows.<init>(BufferedRows.java:37)
at sqlline.SqlLine.print(SqlLine.java:1650)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Solution: This is BUG-82044. There is no fix available yet.
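Until the bug is fixed, one possible workaround is to select the raw column values and compute the percentile client-side. This is only a sketch (plain Python, assuming the column values fit in memory), mirroring the standard linear-interpolation definition that PERCENTILE_CONT uses:

```python
# Client-side equivalent of PERCENTILE_CONT(p) WITHIN GROUP (ORDER BY col ASC),
# using the standard linear-interpolation definition.
def percentile_cont(values, p):
    xs = sorted(values)
    if not xs:
        raise ValueError("PERCENTILE_CONT is undefined for an empty set")
    rank = p * (len(xs) - 1)   # fractional rank into the sorted values
    lo = int(rank)             # index at or below the rank
    frac = rank - lo           # distance toward the next value
    if lo + 1 < len(xs):
        return xs[lo] + frac * (xs[lo + 1] - xs[lo])
    return xs[lo]
```

For the P_C table above, you would run a plain `SELECT COL2 FROM P_C` and feed the resulting rows into this function instead of running the aggregate server-side.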
06-11-2017
10:27 AM
1 Kudo
Problem: hbase hbck fails with the following messages:
zookeeper.ClientCnxn: SASL configuration failed: javax.security.auth.login.LoginException: Zookeeper client cannot authenticate using the 'Client' section of the supplied JAAS configuration: '/usr/hdp/current/hbase-client/conf/hbase_regionserver_jaas.conf' because of a RuntimeException: java.lang.SecurityException: java.io.IOException: /usr/hdp/current/hbase-client/conf/hbase_regionserver_jaas.conf (No such file or directory)
2017-06-11 10:11:36,293 ERROR [main] master.TableLockManager: Unexpected ZooKeeper error when listing children
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hbase-secure/table-lock
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
Solution: Identify the location of the JAAS file, make sure it exists, then set the HBASE_SERVER_JAAS_OPTS parameter accordingly:
grep -i jaas /etc/hbase/conf/hbase-env.sh
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx512m -Djava.security.auth.login.config=/usr/hdp/current/hbase-client/conf/hbase_master_jaas.conf $JDK_DEPENDED_OPTS"
# ls -altr /usr/hdp/current/hbase-client/conf/hbase_master_jaas.conf
-rw-r--r-- 1 hbase root 209 May 8 01:53 /usr/hdp/current/hbase-client/conf/hbase_master_jaas.conf
export HBASE_SERVER_JAAS_OPTS=-Djava.security.auth.login.config=/usr/hdp/current/hbase-client/conf/hbase_master_jaas.conf
Now re-run the hbase hbck.
06-11-2017
10:07 AM
Problem: When running a query in Hive View in Ambari, rows 1 to 100 are shown, but the next page then starts at row 102, so row 101 is skipped. (The original post illustrated this with screenshots of the first and second screens.)
Resolution: This is bug AMBARI-19666, which is fixed in Ambari 2.4.3 and later.
05-12-2018
01:06 PM
Does this issue still exist?
06-11-2017
09:50 AM
Problem: Certain arithmetic operations return the wrong result in Phoenix. For example:
0: jdbc:phoenix:> select 2/4 ;
+----+
| 0 |
+----+
| 0 |
+----+
1 row selected (0.014 seconds)
Cause: The result is returned as an integer rather than a decimal. See PHOENIX-3312.
Workaround: Provide decimal values within the SQL, e.g.:
0: jdbc:phoenix:> select 2.0/4.0 ;
+------+
| 0.5 |
+------+
| 0.5 |
+------+
1 row selected (0.011 seconds)
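The truncation above is ordinary integer division rather than a Phoenix-specific quirk; the same behaviour can be reproduced in Python for illustration:

```python
# Phoenix evaluates 2/4 with two INTEGER operands, so the quotient is
# truncated to 0. Python's floor division shows the same effect:
int_result = 2 // 4       # truncating integer division -> 0
dec_result = 2.0 / 4.0    # at least one decimal operand -> 0.5
```

As with the SQL workaround, making either operand a decimal restores the fractional result.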
06-11-2017
09:48 AM
Question: Are there any functions which allow date arithmetic with Phoenix?
Answer: Yes; date arithmetic is done as a fraction of a day.
For example, to add 12 hours to a date:
> select now(), now() + (0.5) ;
+---------------------------------+---------------------------------+
| DATE '2017-06-04 16:55:07.989' | DATE '2017-06-05 04:55:07.989' |
+---------------------------------+---------------------------------+
| 2017-06-04 16:55:07.989 | 2017-06-05 04:55:07.989 |
+---------------------------------+---------------------------------+
To add 15 minutes you would use (15/1440), i.e.:
0: jdbc:phoenix:> select now(), now() + (0.010416666666667) ;
+---------------------------------+---------------------------------+
| DATE '2017-06-04 16:56:31.492' | DATE '2017-06-04 17:11:31.492' |
+---------------------------------+---------------------------------+
| 2017-06-04 16:56:31.492 | 2017-06-04 17:11:31.492 |
+---------------------------------+---------------------------------+
1 row selected (0.024 seconds)
See also:
RMP-9148 "Provide Date arithmetic functions for Phoenix"
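The fractions above are just the interval divided by the number of such intervals in a day (24 hours, 1440 minutes, 86400 seconds). A small Python helper (purely illustrative, not part of Phoenix) makes the derivation explicit:

```python
# Day-fraction helper for building the constants used in Phoenix
# `now() + fraction` expressions: 12 hours -> 0.5, 15 minutes -> 15/1440.
def day_fraction(hours=0, minutes=0, seconds=0):
    return hours / 24 + minutes / 1440 + seconds / 86400
```

For example, `day_fraction(hours=12)` returns 0.5 and `day_fraction(minutes=15)` returns the 0.010416666666667 used above.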
03-28-2017
08:44 AM
3 Kudos
Assuming you start with a Kerberized HDP cluster with HBase installed, first check what your HBase service principal is, i.e.:
klist -kt /etc/security/keytabs/hbase.service.keytab
Keytab name: FILE:hbase.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
2 12/20/16 13:51:21 hbase/hdp253k1.hdp@HDP.COM
2 12/20/16 13:51:21 hbase/hdp253k1.hdp@HDP.COM
2 12/20/16 13:51:21 hbase/hdp253k1.hdp@HDP.COM
2 12/20/16 13:51:21 hbase/hdp253k1.hdp@HDP.COM
2 12/20/16 13:51:21 hbase/hdp253k1.hdp@HDP.COM
In Ambari, head to HBase -> Configs -> Advanced -> Custom hbase-site.xml and add the following new parameters, with the keytab / principal substituted for your environment. If they already exist for your cluster, set the values as indicated:
hbase.rest.authentication.type=kerberos
hbase.master.kerberos.principal=hbase/_HOST@HDP.COM
hbase.master.keytab.file=/etc/security/keytabs/hbase.service.keytab
hadoop.proxyuser.HTTP.groups=*
hadoop.proxyuser.HTTP.hosts=*
hbase.security.authorization=true
hbase.rest.authentication.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab
hbase.rest.authentication.kerberos.principal=HTTP/_HOST@HDP.COM
hbase.security.authentication=kerberos
hbase.rest.kerberos.principal=hbase/_HOST@HDP.COM
hbase.rest.keytab.file=/etc/security/keytabs/hbase.service.keytab
In Ambari -> HDFS, confirm that the following are set and, if not, add them to 'Custom core-site.xml':
hadoop.proxyuser.HTTP.groups=*
hadoop.proxyuser.HTTP.hosts=*
Restart the affected HBase and HDFS services. On the command line on the HBase master, kinit with the service keytab and start the REST server:
su - hbase
kinit -kt hbase.service.keytab hbase/hdp253k1.hdp@HDP.COM
/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start rest -p 17000 --infoport 17050
Test the REST server without and then with a ticket, as follows:
# kdestroy
# klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_0)
# curl --negotiate -u : 'http://hdp253k1.hdp:17000/status/cluster'
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 401 Authentication required</title>
# kinit -kt hbase.service.keytab hbase/hdp253k1.hdp@HDP.COM
# curl --negotiate -u : 'http://aw253k1:17000/status/cluster'
3 live servers, 0 dead servers, 10.6667 average load
3 live servers
hdp253k1.hdp:16020 1490688381983
requests=0, regions=11
heapSizeMB=120 maxHeapSizeMB=502
03-17-2017
03:58 PM
It depends on whether we are talking about home directories under HDFS or at the OS level, but the answer to both is yes.
For accounts under HDFS, the home directory is determined by the value of dfs.user.home.dir.prefix; see: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
For OS users, you can pre-create your service users before install, set ignore_groupsusers_create=true in cluster-env.xml, and then launch the installer. The service users and groups are listed here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/create-system-users-and-groups.html
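For example, to have HDFS home directories resolve under /home instead of the default /user, the prefix would be set in hdfs-site.xml as follows (the /home value here is purely illustrative):

```
dfs.user.home.dir.prefix=/home
```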
03-13-2017
11:33 PM
1 Kudo
Q: Is there any way to change the user my Hive jobs run as under LLAP? They always seem to run as the 'hive' user.
A: Hive LLAP does not currently support hive.server2.enable.doAs=true. All sessions will run under the hive account.